We developed DiscoGrad, a tool for automatic differentiation through C++ programs involving input-dependent control flow (e.g., "if (f(x) < c) { ... }", differentiating wrt. x) and randomness. Our initial motivation was to enable the use of gradient descent with simulations, which often rely heavily on such discrete branching. The latter makes plain autodiff mostly useless, since it can only account for the single path taken through the program. Our tool offers several backends that handle this situation, giving useful descent directions for optimization by accounting for alternative branches. Besides simulations, this problem arises in many other places, for example in deep learning when trying to combine imperative programs with neural networks.
In a nutshell, DiscoGrad applies an (LLVM-based) source-to-source transformation to your C++ program, adding some calls to our header library, which then handles the gradient computation. What sets it apart from similar tools/estimators is that it's fully automatic (no need to come up with a differentiable problem formulation/reparametrization) and that the branching condition can be any function of the program inputs (no need to know upfront what distribution the condition follows).
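To make this concrete, here's a tiny sketch of the kind of program we mean (illustrative only, not DiscoGrad's actual API or usage). Each individual run is a step function of x, so plain autodiff along the single path taken reports a zero gradient almost everywhere, even though the expectation over the randomness varies smoothly with x:

#include <random>

// Counts how often a noisy comparison against x comes out true. The branch is
// input-dependent control flow: which side is taken depends on x and on the
// noise, so d(acc)/dx is zero along any single execution path, while the
// expected output is a smooth, optimizable function of x.
double model(double x, std::mt19937& rng) {
  std::normal_distribution<double> noise(0.0, 1.0);
  double acc = 0.0;
  for (int i = 0; i < 100; ++i) {
    if (x + noise(rng) < 0.5)
      acc += 1.0;
  }
  return acc / 100.0;
}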
We're currently a team of two working on DiscoGrad as part of a research project, so don't expect to see production-grade code quality, but we do intend for it to be more than a throwaway research prototype. Use cases we've successfully tested include calibrating simulation models of epidemics or evacuation scenarios via gradient descent, and combining simulations with neural networks in an end-to-end trainable fashion.
We hope you find this interesting and useful, and we're happy to answer questions!
Here's the repo: https://github.com/streamdal/streamdal
Here's the site: https://streamdal.com
And here's a live demo: https://demo.streamdal.com (github repo has an explanation of the demo)
— — —
THE PROBLEM
We built this because current observability tooling is not able to provide real-time insight into the actual data that your software is reading or writing, which means it takes longer to identify issues and longer to resolve them. That's time, money, and customer satisfaction at stake.
Want to build something in-house? Prepare to deploy a team, spend months of development time, and tons of money bringing it to production. Then be ready to have engineers around to babysit your new monitoring tool instead of working on your product.
— — —
THE BASIC FLOW
So, wtf is a “tail -f for your data”? What we mean is this:
1. We give you an SDK for your language, a server, and a UI.
2. You instrument your code with `StreamdalSDK.Process(yourData)` anytime you read or write data in your app.
3. You deploy your app/service.
4. Go to the provided UI (or run the CLI app) and be able to peek into what your app is reading or writing, like with `tail -f`.
And that's basically it. There's a bunch more functionality in the project but we find this to be the most immediately useful part. Every developer we've shown this to has said "I wish I had this at my gig at $company" - and we feel exactly the same. We are devs and this is what we’ve always wanted, hundreds of times - a way to just quickly look at the data our software is producing in real-time, without having to jump through any hoops.
If you want to learn more about the "why" and the origin of this project - you can read about it here: https://streamdal.com/manifesto
— — —
HOW DOES IT WORK?
The SDK establishes a long-running session with the server (using gRPC) and "listens" for commands that are forwarded to it all the way from the UI -> server -> SDK.
The commands are things like: "show me the data that you are currently consuming", "apply these rules to all data that you produce", "inspect the schema for all data", and so on.
The SDK interprets the command and either executes Wasm-based rules against the data it's processing or, if it's a `tail` request, sends the data to the server, which forwards it to the UI for display.
The SDK IS part of the critical path but it does not have a dependency on the server. If the server is gone, you won't be able to use the UI or send commands to the SDKs, but that's about it - the SDKs will continue to work and attempt to reconnect to the server behind the scenes.
— — —
TECHNICAL BITS
The project consists of a lot of "buzzwordy" tech: we use gRPC, gRPC-Web, protobuf, redis, Wasm, Deno, ReactFlow, and probably a few other things.
The server is written in Go, all of the Wasm is Rust and the UI is TypeScript. There are SDKs for Go, Python, and Node. We chose these languages for the SDKs because we've been working in them daily for the past 10+ years.
The reasons for the tech choices are explained in detail here: https://docs.streamdal.com/en/resources-support/open-source/
— — —
LAST PART
OK, that's it. What do you think? Is it useful? Can we answer anything?
- If you like what you're seeing, give our repo a star: https://github.com/streamdal/streamdal
- And if you really like what you're seeing, come talk to us on our discord: https://discord.gg/streamdal
Talk soon!
- Daniel & Ustin
It's like a CMS but instead of only letting you set static content, you can insert arbitrary logic from the UI, including A/B tests and ML "loops".
I previously built a landing page optimization tool that let marketers define variants of their headline, CTA, cover image, etc, then used a genetic algorithm to find the best combination of them. They used my Chrome extension to define changes on DOM elements based on their unique CSS selector. But this broke when the underlying page changed and didn't work with sites that used CSS modules. Developers hated it.
I took a step back.
The problem I was trying to solve was making the page configurable by marketers in a way that developers liked. I decided to solve it from first principles and this led to Hypertune.
Here's how it works. You define a strongly typed configuration schema in GraphQL, e.g.
type Query {
page(language: Language!, deviceType: DeviceType!): Page!
}
type Page {
headline: String!
imageUrl: String!
showPromotion: Boolean!
benefits: [String!]!
}
enum Language { English, French, Spanish }
enum DeviceType { Desktop, Mobile, Tablet }
Then marketers can configure these fields from the UI using our visual, functional, statically-typed language. The language UI is type-directed so we only show expression options that satisfy the required type of the hole in the logic tree. So for the "headline" field, you can insert a String expression or an If / Else expression that returns a String. If you insert the latter, more holes appear. This means marketers don't need to know any syntax and can't get into invalid states. They can use arguments you define in the schema like "language" and "deviceType", and drop A/B tests and contextual multi-armed bandits anywhere in their logic. We overlay live counts on the logic tree UI so they can see how often different branches are called.
You get the config via our SDK which fetches your logic tree once on initialization (from our CDN) then evaluates it locally so you can get flags or content with different arguments (e.g. for different users) immediately with no network latency. So you can use the SDK on your backend without adding extra latency to every request, or on the frontend without blocking renders. The SDK includes a command line tool that auto-generates code for end-to-end type-safety based on your schema. You can also query your config via the GraphQL API.
If you use the SDK, you can also embed a build-time snapshot of your logic tree in your app bundle. The SDK initializes from this instantly then fetches the latest logic from the server. So it'll still work in the unlikely event the CDN is down. And on the frontend, you can evaluate flags, content, A/B tests, personalization logic, etc, instantly on page load without any network latency, which makes it compatible with static Jamstack sites.
I started building this for landing pages but realized it could be used for configuring feature flags, in-app content, translations, onboarding flows, permissions, rules, limits, magic numbers, pricing plans, backend services, cron jobs, etc, as it's all just "code configuration".
This configuration is usually hardcoded, sprawled across json or yaml files, or in separate platforms for feature flags, content management, A/B testing, pricing plans, etc. So if a PM wants to A/B test new onboarding content, they need a developer to write glue code that stitches their A/B testing tool with their CMS for that specific test, then wait for a code deployment. And at that point, it may not be worth the effort.
The general problem with having separate platforms is that all this configuration naturally overlaps. Feature flags and content management overlap with A/B testing and analytics. Pricing plans overlap with feature flags. Keeping them separate leads to inflexibility and duplication and requires hacky glue code, which defeats the purpose of configuration.
I think the solution is a flexible, type-safe code configuration platform with a strongly typed schema, type-safe SDKs and APIs, and a visual, functional, statically-typed language with analytics, A/B testing and ML built in. I think this solves the problem with having separate platforms, but also results in a better solution for individual use cases and makes new use cases possible.
For example, compared specifically to other feature flag platforms, you get auto-generated type-safe code to catch flag typos and errors at compile-time (instead of run-time), code completion and "find all references" in your IDE (no figuring out if a flag is in kebab-case or camelCase), type-safe enum flags you can exhaustively switch on, type-safe object and list flags, and a type-safe logic UI. You pass context arguments like userId, email, etc, in a type-safe way too with compiler errors if you miss or misspell one. To clean up a flag, you remove it from your query, re-run code generation and fix all the type errors to remove all references. The full programming language under the hood means there are no limits on your flag logic (you're not locked into basic disjunctive normal form). You can embed a build-time snapshot of your flag logic in your app bundle for guaranteed, instant initialization with no network latency (and keep this up to date with a commit webhook). And all your flags are versioned together in a single Git history for instant rollbacks to known good states (no figuring out what combination of flag changes caused an incident).
There are other flexible configuration languages like Dhall (discussed here: https://news.ycombinator.com/item?id=32102203), Jsonnet (discussed here: https://news.ycombinator.com/item?id=19656821) and Cue (discussed here: https://news.ycombinator.com/item?id=20847943). But they lack a UI for nontechnical users, can't be updated at run-time and don't support analytics, A/B testing and ML.
I was actually going to start with a basic language that had primitives (Boolean, Int, String), a Comparison expression and an If / Else. Then users could implement the logic for each field in the schema separately.
But then I realized they might want to share logic for a group of fields at the object level, e.g. instead of repeating "if (deviceType == Mobile) { primitiveA } else { primitiveB }" for each primitive field separately, they could have the logic once at the Page level: "if (deviceType == Mobile) { pageObjectA } else { pageObjectB }". I also needed to represent field arguments like "deviceType" in the language. And I realized users may want to define other variables to reuse bits of logic, like a specific "benefit" which appears in different variations of the "benefits" list.
So at this point, it made sense to build a full, functional language with Object expressions (that have a type defined in the schema) and Function, Variable and Application expressions (to implement the lambda calculus). Then all the configuration can be represented as a single Object with the root Query type from the schema, e.g.
Query {
page: f({ deviceType }) =>
switch (true) {
case (deviceType == DeviceType.Mobile) =>
Page {
headline: f({}) => "Headline A"
imageUrl: f({}) => "Image A"
showPromotion: f({}) => true
benefits: f({}) => ["Ben", "efits", "A"]
}
default =>
Page {
headline: f({}) => "Headline B"
imageUrl: f({}) => "Image B"
showPromotion: f({}) => false
benefits: f({}) => ["Ben", "efits", "B"]
}
}
}
So each schema field is implemented by a Function that takes a single Object parameter (a dictionary of field argument name => value). I needed to evaluate this logic tree given a GraphQL query that looks like:
query {
page(deviceType: Mobile) {
headline
showPromotion
}
}
So I built an interpreter that recursively selects the queried parts of the logic tree, evaluating the Functions for each query field with the given arguments. It ignores fields that aren't in the query so the logic tree can grow large without affecting query performance.
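Roughly sketched (simplified here to one level of nesting and plain string values; the real tree has nested objects, typed expressions, A/B tests and so on), the evaluation idea is:

#include <functional>
#include <map>
#include <string>

using Args   = std::map<std::string, std::string>;
using Field  = std::function<std::string(const Args&)>;
using Object = std::map<std::string, Field>;   // field name -> logic
using Query  = std::map<std::string, Args>;    // field name -> arguments

// Only the queried fields are evaluated; everything else in the logic tree is
// never touched, so the tree can grow without slowing queries down.
std::map<std::string, std::string> evaluate(const Object& logic, const Query& query) {
  std::map<std::string, std::string> result;
  for (const auto& [field, args] : query)
    result[field] = logic.at(field)(args);
  return result;
}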
The interpreter is used by the SDK, to evaluate logic locally, and on our CDN edge server that hosts the GraphQL API. The response for the example above would be:
{
"__typename": "Query",
"page": {
"__typename": "Page",
"headline": "Headline A",
"showPromotion": true
}
}
Developers were concerned about using the SDK on the frontend as it could leak sensitive configuration logic, like lists of user IDs, to the browser.
To solve this, I modified the interpreter to support "partial evaluation". This is where it takes a GraphQL query that only provides some of the required field arguments and then partially evaluates the logic tree as much as possible. Any logic which can't be evaluated is left intact.
The SDK can leverage this at initialization time by passing already known arguments (e.g. the user ID) in its initialization query so that sensitive logic (like lists of user IDs) are evaluated (and eliminated) on the server. The rest of the logic is evaluated locally by the SDK when client code calls its methods with the remaining arguments. This also minimizes the payload size sent to the client and means less logic needs to be evaluated locally, which improves both page load and render performance.
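Here's a rough sketch of the partial-evaluation idea on a toy expression type (assumed types for illustration, not the real logic-tree representation): conditions whose arguments are already known are folded away on the server, so sensitive branches never reach the client, while conditions on still-unknown arguments are kept for the SDK to evaluate later.

#include <map>
#include <memory>
#include <optional>
#include <string>

struct Expr {
  // Leaf: a literal string value.
  std::optional<std::string> literal;
  // Branch: if (args[argName] == compareTo) then thenBranch else elseBranch.
  std::string argName, compareTo;
  std::shared_ptr<Expr> thenBranch, elseBranch;
};

std::shared_ptr<Expr> partialEval(const std::shared_ptr<Expr>& e,
                                  const std::map<std::string, std::string>& known) {
  if (e->literal) return e;                                  // nothing to fold
  auto it = known.find(e->argName);
  if (it == known.end()) {                                   // argument unknown:
    auto copy = std::make_shared<Expr>(*e);                  // keep the branch,
    copy->thenBranch = partialEval(e->thenBranch, known);    // simplify inside it
    copy->elseBranch = partialEval(e->elseBranch, known);
    return copy;
  }
  // Argument known: the condition (and the untaken branch) disappears entirely.
  return partialEval(it->second == e->compareTo ? e->thenBranch : e->elseBranch, known);
}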
The interpreter also keeps a count of expression evaluations as well as events for A/B tests and ML loops, which are flushed back to Hypertune in the background to overlay live analytics on the logic tree UI.
It's been a challenge to build a simple UI given there's a full functional language under the hood. For example, I needed to build a way for users to convert any expression into a variable in one click. Under the hood, to make expression X a variable, we wrap the parent of X in a Function that takes a single parameter, then wrap that Function in an Application that passes X as an argument. Then we replace X in the Function body with a reference to the parameter. So we go from:
if (X) {
Y
} else {
Z
}
to
((paramX) =>
if (paramX) {
Y
} else {
Z
}
)(X)
So a variable is just an Application argument that can be referenced in the called Function's body. And once we have a variable, we can reference it in more than one place in the Function body. To undo this, users can "drop" a variable in one click which replaces all its references with a copy of its value.
Converting X into a variable gets more tricky if the parent of X is a Function itself which defines parameters referenced inside of X. In this case, when we make X a variable, we lift it outside of this Function. But then it doesn't have access to the Function's parameters anymore. So we automatically convert X into a Function itself which takes the parameters it needs. Then we call this new Function where we originally had X, passing in the original parameters. There are more interesting details about how we lift variables to higher scopes in one click but that's for another post.
Thanks for reading this far! I'm glad I got to share Hypertune with you. I'm curious about what use case appeals to you the most. Is it type-safe feature flags, in-app content management, A/B testing static Jamstack sites, managing permissions, pricing plans or something else? Please let me know any thoughts or questions!
Pulled the master branch, no build errors, Angular isn't complaining, but I know the problem is on the UI side, probably with GraphQL (Angular Apollo). I cannot get a single VSCode extension to work to help debug it, so a junior is going through and commenting out code they changed to try to find it.
There is no way this project is done on time and the entire software department is looking at it. I'm 1 of 3 leads on it.
Quite literally just gave myself a panic attack thinking about it and the looming deadline.
This isn't the first time; I've been dealing with this for months now. It went away for a bit (2-3 months) and now it's back. I've been drinking all kinds of calming teas, taking supplements, trying to change my diet, walk more, etc...
I just can't stop them :( 99% of the time it's chest pain or back pain or etc... every once in a while it escalates into a panic attack.
I keep trying to tell myself f' it - it doesn't matter. Fire me if you want to; we're doing the best we can. But it's not sinking in...
I.do.not.know.what.to.do.anymore. Vacation? FMLA? Career change? wtf would I do if not web development (been doing it 20+ years)
help.
I'm not asking for any money for the books, but I'd appreciate it if you could PayPal me the shipping costs. Or if you're somewhere local in the Bay Area, we can meet in person. Also, if you have better ideas for meaningful places to donate such technical books (Hacker Dojo?), let me know.
The books are listed below. Most are in very good condition. I provided the Amazon link for the ones I could find.
Thanks everyone.
(Edit: fixed the formatting)
Technical books:
1. Windows NT Device Driver Book, The: A Guide for Programmers: http://www.amazon.com/Windows-Device-Driver-Book-Programmers/dp/0131844741/ref=tmm_pap_title_1
2. Advanced Windows by Jeffrey Richter: http://www.amazon.com/Advanced-Windows-Jeffrey-Richter/dp/1572315482/ref=sr_1_1?ie=UTF8&s=books&qid=1267856585&sr=1-1
3. Handbook of Computer Communication Standard, Vol 1: The Open System Intercon Model (OSI): http://www.amazon.com/Handbook-Computer-Communication-Standard-Computer-Communications/dp/0024155217
4. The C++ Programming Language by Bjarne Stroustrup: http://www.amazon.com/C-Programming-Language-Bjarne-Stroustrup/dp/0201538644/ref=tmm_pap_title_3
5. Gigabit Networking by Craig Partridge: http://www.amazon.com/Gigabit-Networking-Craig-Partridge/dp/0201563339/ref=sr_1_1?ie=UTF8&s=books&qid=1267858462&sr=1-1
6. Networking Software by Colin B. Ungaro: http://www.amazon.com/Networking-Software-Mcgraw-Hill-Data-Communications/dp/0076069699/ref=sr_1_1?ie=UTF8&s=books&qid=1267858495&sr=1-1
7. Inside Windows NT by Helen Custer: http://www.amazon.com/Inside-Windows-Network-Helen-Custer/dp/155615481X/ref=tmm_pap_title_0
8. Operating System Concepts, 6th Edition by Abraham Silberschatz, Peter Baer Galvin, Greg Gagne: http://www.amazon.com/Operating-System-Concepts-Abraham-Silberschatz/dp/0471417432/ref=tmm_hrd_title_5
9. Problem Solving With C++: The Object of Programming by Walter J. Savitch: http://www.amazon.com/Problem-Solving-C-Object-Programming/dp/0201357496/ref=tmm_pap_title_0
10. ATM Foundations for Broadband Networks by Uyless Black
Non-technical books:
11. Daily Reflections for Highly Effective People: Living the 7 Habits of Highly Effective People Every Day by Stephen R. Covey: http://www.amazon.com/Daily-Reflections-Highly-Effective-People/dp/0671887173/ref=sr_1_1?ie=UTF8&s=books&qid=1267857292&sr=1-1
12. The Trial of Socrates by Irving Stone: http://www.amazon.com/Trial-Socrates-I-F-Stone/dp/0385260326/ref=sr_1_1?ie=UTF8&s=books&qid=1267857423&sr=1-1
13. 301 Great Management Ideas: From America's Most Innovative Small Companies by Bradford Ketchum Jr.: http://www.amazon.com/301-Great-Management-Ideas-Innovative/dp/1880394219/ref=sr_1_1?ie=UTF8&s=books&qid=1267856202&sr=1-1
14. After the Merger: The Authoritative Guide for Integration Success, Revised Edition: http://www.amazon.com/After-Merger-Authoritative-Integration-Success/dp/0786312394/ref=sr_1_1?ie=UTF8&s=books&qid=1267856752&sr=1-1
15. Beyond the Summit: Setting and Surpassing Extraordinary Business Goals: http://www.amazon.com/Beyond-Summit-Surpassing-Extraordinary-Business/dp/159184004X/ref=tmm_hrd_title_1
16. Psychology and Personal Growth (2nd Edition) by Abe Arkoff
17. Negotiating Game: How to Get What You Want by Chester L. Karrass
18. Your Erroneous Zones by Wayne W. Dyer
19. You Can Negotiate Anything by Herb Cohen
Having lived and studied in three different cultures in different parts of the world I can say that, from my experience, grading in the US and abroad is pretty much the same. Other parts of the world might use a 0 to 10 scale while US schools like the A through F symbols. In the end the result is the same: You are grading test-taking performance, not knowledge.
Any entrepreneur knows full well that test performance does not matter. How many times have you failed before succeeding? Failure is a part of learning. Some might argue that failure is the start of learning.
Yet another example is that of a skilled athlete, say, a gymnast. It might take weeks or months of failure to master one particular move that lasts barely past one second. After hundreds of failures the gymnast gets it right and it is part of their skill set. Countless other examples abound.
The problem with test-style grading is that it does not encode acquired knowledge. It encodes isolated and aggregate test scores. And, to add further insult to injury, you get to average test scores at the end of the year, semester or quarter to get your grade. The resulting number is meaningless.
Imagine that we applied this to the aforementioned gymnast. They would get grades below 5 or 6 most of the time due to constant failures while learning. Then, once they figure it out they start getting 10's. Well, if you average hundreds of scores at 5 or below with a few dozen 10's you'll get an average score that does not represent what this person can now actually do. And therein lies the problem.
So, I've been looking at grading systems that place greater weight on success than on failure, in order to encode acquired knowledge rather than averaged test scores.
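For example, one simple scheme (just an illustration, not a standard) weights each score by how recent it is, so the grade converges toward what the student can do now rather than averaging in all the early failures:

#include <cmath>
#include <vector>

// Recency-weighted grade: the newest score gets weight 1, the one before gets
// `decay`, then decay^2, and so on, so early failures fade out over time.
double recencyWeightedGrade(const std::vector<double>& scores, double decay = 0.5) {
  double weightedSum = 0.0, totalWeight = 0.0;
  for (std::size_t i = 0; i < scores.size(); ++i) {
    double w = std::pow(decay, static_cast<double>(scores.size() - 1 - i));
    weightedSum += w * scores[i];
    totalWeight += w;
  }
  return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
}

// E.g. the gymnast's scores {4, 4, 4, 4, 10, 10} come out around 8.6 here,
// versus a plain average of 6.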
I'd be interested to learn from others who may have traveled this road and what they may have learned in the process.
An even greater topic is how to go about changing the way schools grade for the benefit of all involved.
Thank you,
-Martin
PM: x@y.z
where x = martin
y = algoshift
z = com
You can install from a different repository of apps such as F-Droid or any other FOSS repository. But that means you must trust the managers of that repository, who build the apps from source code with their own keys, which means they have the opportunity to modify them in secret, maybe adding some module, and we wouldn't know. So we would have the same problem as with the Google Play Store, except the trust problem shifts to this other repository's developers.
You can use another frontend to the Play Store, but then we still have to trust Google; it just means we don't need a Google account. We also need to trust the Aurora Store if we install it from an APK file, because the devs could add some malicious module before building the APK, and we wouldn't know that by looking at the source code they have published. And if we install the Aurora Store from F-Droid, then we have to trust F-Droid instead. So it's the same trust problem no matter what.
We can also choose to just trust the developers of each app by installing apps directly from the devs, downloading the APK file, but then the trust problem moves to each individual developer instead. We don't know if they are adding malicious code before building.
So no matter how we install apps, we will never know really, there's always a trust problem.
Maybe the best thing to do is to isolate each app as much as possible. Create a different user profile for each app. And then decide on a case by case basis who is most trustworthy when it comes to which method of installing the app, but no matter what there will be a trust problem.
Or do I have something wrong, or am I missing something?
Chicago Drops Expensive WiFi Project http://www.tmcnet.com/wifirevolution/articles/10205-chicago-...
S.F. citywide Wi-Fi plan fizzles as provider backs off
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2007/08/30/...
Anybody have any ideas how to create a breakthrough solution? It seems there is a plentiful supply of wifi connections, but they all need passcodes.
I want to be able to log on to any WiFi in San Francisco even if I don't have the passcode... I don't want to use a hacker tool, and I want it to be fair to everyone. What's the problem with doing that?
The thing is, if you get REAL lucky, you MIGHT just make it through to the hold queue after all. There's a slight variation in the way the pre-recorded message plays out in the first minute or two if you happen to make it through, but it's all based on unknowable and unpredictable call volume from potentially hundreds of thousands of other people all at the same time and the line is only open M-F 8-5.
The only way to solve a problem with this agency in my case is to reach a human being with this agency and this is the only way to do it. Problem is, it's nearly impossible to get into the hold queue. I've called them, brute-force style, over a dozen times over and over again and been hung up on after the pre-recorded crap every single time. The only way to get into the hold queue is to get lucky. Which means I need a way to automate this thing for brute force so I can finally get a human.
Anyway, I've got an iPhone 12. I'd rather not have to sit here and dial, wait, hang up, dial, wait, hang up, etc. all day long trying to brute force my way into a hold queue to get to a human being, which is unfortunately the only possible way to solve my problem with this particular state government agency.
I'm not aware of any programmable phone technology interface that I could use as a one-off to automate my iPhone's phone interface for something like this, hence why I'm asking HN. Would you folks happen to know of a tool I could use for something like this? Doesn't need to be anything super formal, certainly not looking to build a product, and it only needs to dial ONE number. It just needs to be able to listen for a pattern and hang up and repeat if it hears that exact same pattern, and if it hears a variation from that, NOT hang up. Ideally something I can script from my MacBook Pro to control my iPhone, then put the thing on speaker while I go off and do more important tasks, like clean the kitchen or something.
I'm chasing down non-technology options (state senator's office) but in the meantime, I'd like to see if there's a way I can automate brute forcing my way through their broken phone system. Anyone know of any scriptable way to do this? Thank you!
how "groups" are depicted in public is internalized, privately. "hmmm, everyone in that commercial looks like me / doesn't look like me. i think i'll buy that product / won't buy that product."
it's like the problem with public bathroom signs. and false choices. why not instead use the letters "M" "F" "U". Male, Female, Unisex. There. We can embrace kilts and convenience. (and if that doesn't work, there are other options. i'm sure we can come to some agreement that doesn't require equating gender with "my favorite color is..." or "most people of gender x/y (pun totally intended) wear...")
so, is it just me? or is this icon sexist? (even a little?) you decide.
Everyone has a rough idea of what is true based on their experience and interpretation of available evidence. However, it is hard to assess how true something really is because different people have different experiences and interpretations of the world around them, and because the evidence – or one’s interpretation of that evidence – can change at any moment.
Marqt.org attempts to solve this problem by aggregating the wisdom of the crowd to arrive at a quantitative measure of how true something is, and how it might change, in real-time. The project imagines what it would be like if you built an open-source knowledge base like Wikipedia using the format of Twitter and verified everything with Stack Overflow.
I initially announced Marqt.org several weeks ago here:
https://news.ycombinator.com/item?id=35814313
After hearing from some of you, I have reassessed my assumptions and made some changes that I hope will improve the system and encourage you to try it out.
* Privacy and anonymity
Against conventional wisdom, I intentionally did not install analytics or tools that track user behavior from the start. I am leaning into this further, allowing you to use the full feature set anonymously. I still do encourage you to sign up for an account though, and there are now further privacy protections for those that do. You can read the details of how I do that in the comment below.
* Dialing in how true or false a statement is
Truth is not black and white, and while the purpose of Marqt.org is to navigate the nuances around partial truths (by saying something can be 68% true, for example), forcing users to take a binary position to get there doesn’t really seem fair. So now, instead of just being able to “marq it” true or false, you can dial in how true or false a marqt is. This also allows abstaining, by setting your marq to 50% (it will snap to 50 from 45-55). It also somewhat acts as an “undo,” if you marq a marqt by accident. Hotkeys still allow you to go fully true or false with “j” and “k” (or "t/f"), but now you can also hit “n” to go neutral.
* It is easier to evaluate than create
Adding a remarq puts a lot of pressure on “getting it right” and being comprehensive. It also requires work and time and is something that generative AI can do easily. So now, when you make a marqt, remarqs that argue each side are auto-generated, hopefully providing a contextual base to critically engage with. Then you can upvote or downvote the remarqs based on how remarqable or unremarqable they are, and add a remarq yourself if you want to add to the conversation.
* Leaning into the subjectivity of truth
I still don’t know what makes a good marqt yet. It took the internet some time to figure out what a good tweet is, so I’m hoping the same can happen with Marqt.org. One of the core assumptions that the marqt is built on is around the subjectivity of truth, and so I’ve front-loaded some marqts that lean more into individual experience ["I am skeptical of most things I see.", "I feel optimistic about the future of humanity.", "I am confident my job cannot be replaced by AI.",] (see below for how marqts are sorted).
Lastly, I concede that this project can be seen as a naive idea built on quixotic fantasies, but I genuinely believe that we can solve the problem of misinformation in the age of LLM hallucinations, social echo chambers and media bias, and I sincerely hope that you and Marqt.org can be a part of that eventual solution.
arthur@marqt.org
But a worrying habit these days, as more and more software gets sold/distributed through app stores and "integrated" solutions like Steam, is that the majority of feedback comes through the "rating" system only, with no way of contacting a real human.
So, when I have a problem with a piece of software, I'm usually lost searching on Google, where for most queries crap sites like Quora appear whose content is visible only after registration, or, when it's driver/DLL related, a bunch of scammer sites fill the first three Google pages and only then does a blog entry written three years ago come up where someone encountered the same DLL problem with the software I use and posts a solution.
With public open-source software, the bug trackers mostly lack single sign-on even when a platform exists which could provide SSO for the service (like a game account or, in the case of Wikimedia's MediaWiki, the Wikimedia SSO) - why?! WHY DO I HAVE TO GO THE EXTRA STEP AND REGISTER AN ACCOUNT TO REPORT SOMETHING YOU F..D UP?!
Or, on bugtrackers where there's no backend SSO architecture, there's no way to sign in using FB, Twitter or any other OAuth provider. WHY?
What is it that keeps YOU or YOUR COMPANY from providing a public bug tracker? After all, you're likely to have an internal bug tracker for the dev team already, so why not provide a "public" one linked to the internal one?
After all, if customers can help each other (or even you as the developer) you saved money on support and developer time...
Bruno here. I’m not a developer, but I’ve spent 15+ years in the photography industry as a camera/lens reviewer, photojournalist, and consultant for major brands.
One of the most frequent (and frustrating) questions photographers ask is:
“Will this lens work on this camera?”
You’d think this would be a solved problem in the AI era... but it’s not. There has never been a structured, global dataset for cross-system lens compatibility.
---
### Why?
The data exists, but it’s fragmented—buried in *paper archives, PDF manuals, and old forums from the Dot-Com era*.
Compatibility isn’t just about mount fit: there are flange distance constraints, electronic protocols, AF & aperture limitations, and third-party adapters.
Manufacturers (Canon, Nikon, Sony, etc.) have attempted to document their own systems—but often with errors, omissions, and without ever considering intercompatibility between brands.
No one has ever attempted to unify this knowledge in an open, structured way.
---
### So after Christmas, I decided to teach myself coding and build what nobody else had:
CLIC – Camera & Lens Interoperability Checker
Covers 71 manufacturers, 106 mounts, 1347 cameras, and 4039 lenses (for now).
Detects native compatibility + third-party adapter solutions.
Calculates theoretical matchups (~2.1M possible combinations).
Database grows daily—based on historical archives and community contributions.
---
### Challenges:
Structuring compatibility logic in a scalable way (mechanical fit, adapters, electronic limitations).
Extracting reliable data from disparate sources (manufacturers' PDFs, old forum posts, literal books).
Managing edge cases (e.g., the Nikon F mount has ~10 sub-versions with different behaviors).
---
### Try it here → https://www.fotocaz.com/
Still a work in progress, but I’d love feedback from HN on:
Eliminating false positives to ensure the tool provides photographically accurate results, not just technically correct ones. (There’s what the code says, and what mechanical + optical constraints allow.)
Better ways to structure the dataset (currently using JSON and Vanilla JS… but it won't scale easily) - see the rough sketch after this list.
Efficient search/filtering for 2M+ potential matchups (and this problem can only grow as I add more lenses, cameras, mounts and adapters, because in 100 years, manufacturers had plenty of time to go wild).
How to handle cases where official compatibility info is missing or uncertain?
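On the dataset-structure point above: one possible normalisation (a rough sketch with assumed field names, written in C++ just to show the shape of the data; my current code is JSON + Vanilla JS) is to store facts per mount and per adapter and derive lens/camera compatibility from them, instead of materialising the ~2.1M pairs explicitly:

#include <map>
#include <string>
#include <utility>

struct Compatibility { bool fits; bool autofocus; bool autoAperture; };

struct Catalog {
  std::map<std::string, std::string> lensMount;    // lens   -> mount name
  std::map<std::string, std::string> cameraMount;  // camera -> mount name
  // (lens mount, camera mount) -> what still works through the adapter
  std::map<std::pair<std::string, std::string>, Compatibility> adapters;

  Compatibility check(const std::string& lens, const std::string& camera) const {
    const auto& lm = lensMount.at(lens);
    const auto& cm = cameraMount.at(camera);
    if (lm == cm) return {true, true, true};        // native fit
    auto it = adapters.find({lm, cm});
    if (it != adapters.end()) return it->second;    // adapted fit
    return {false, false, false};
  }
};

Sub-versions like the Nikon F generations could then be modelled as separate mount entries (or flags on a mount), which keeps the edge cases in the data rather than in the lookup logic.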
I’m here to learn from you and improve this tool.
Thoughts?
First, a little background. I'm currently developing an application to help students write essays. It was a bit of a selfish endeavour as I wanted a free app to outline my essays with and I couldn't find any, so I decided to write my own.
Currently I'm using AWT and Swing, but the problem is that the widget library seems too limited to do anything more advanced than what I'm currently doing. The lack of fully native widgets also throws the polish off a bit, and the native L&F with Swing still isn't as "native" as it should be.
Should I continue using Swing merely because it's easier to work with? Or should I switch to GTK+ or Qt for a better and more comprehensive experience?
Maybe I should just ditch Java and switch to Objective-C or C++.
Thoughts?
(1) There is a lot of talk about developing and delivering more in social media. The "more" might come from new data sources, data manipulation techniques, Web sites, or companies. There might be more in just the content, or in social search to find such content.
(2) There is not much clarity about just how to have more in social media or just why to have it. For the "why": which users want it, and what would they do with it?
Here are two examples of some of the recent talk about social media:
First, here on HN is the thread:
"Sergey Brin: We’ve Touched 1 Percent Of What Social Search Can Be (techcrunch.com)"
at
http://techcrunch.com/2011/01/20/sergey-brin-weve-touched-1-percent-of-what-social-search-can-be/
Second is the thread "Building Better Social Graphs" at Fred Wilson's blog A VC at:
http://www.avc.com/a_vc/2011/01/building-better-social-graphs.html#disqus_thread
with a lot of relatively good relevant comments.
So, let's dig in a little: We can start with being more clear about our terminology. There are three terms:
(1) Social Graph:
So, in applied math, the relevant definition of a graph is a collection of arcs and nodes with each node much like a geometric point and each arc a connection between two nodes (usually distinct but sometimes the same). An arc may be directed or not: A directed arc is drawn with an arrowhead and is regarded as a one-way street.
In a social graph, each node is likely a person (or maybe the blog of a person) but might be a group of people. The people might be users of Facebook, Twitter, etc. or just people who don't use the Internet. Then an arc might represent friend on Facebook, a follower on Twitter, or some such.
(2) Social Media:
Examples would include Facebook, Twitter, etc., maybe even HN. Maybe a definition of social media would be some Web site where people interact.
(3) Social Search:
Given data from social graphs and/or social media, one could do searches of that data.
To continue: There can be a problem with the concept of a social graph: Too commonly it is left unclear just what the arcs mean!
So, consider a person's social graph: They may have an arc to each person (A) they went to high school with, (B) live on the same street as, (C) went to college with, (D) dated, (E) married, (F) worked at the same company as, (G) hired to plow snow from their driveway, (H) got their business card at a Consumer Electronics Show in Las Vegas, etc. So, the point of these examples is that there is enormous variety on what the arcs can mean.
So, to make progress, maybe usually we should be more clear on what the arcs mean!
Now to something more substantive:
In Fred Wilson's thread "Building Better Social Graphs", there were two strikingly different themes:
First, Wilson started the thread with a post where he wanted to be able to download each of his social graphs and then curate them himself.
Second, in the comments, the theme was strong that given the data on the social graphs, we should have computer-based means to process the data for curation, etc. Curiously, the goal of this processing was Wilson's "Building Better Social Graphs" by stronger means than just Wilson's manual curation! That is, Wilson's title was stronger than Wilson's post, and the comments were closer to the title than the post was!
So, from 40,000 feet up, it appears that many people have some vague, ill-defined, intuitive, poorly identified and articulated visions of making progress with social graphs. Each of the broad subject of social, Facebook, Twitter, and the Internet, is so big that we should take the visions, as crude as they are, seriously.
So, there are a lot of people (e.g., 500+ million users of Facebook, etc.); each such person has one or more social graphs; somehow there should be some value in that data; we might process the data automatically to obtain some useful results; and the data might be good for doing related searching. Yes, for an ad supported Web site, the data might also be good for ad targeting!
Again, we should not miss the likely importance: From higher than 40,000 feet up, for people, networking has long been very important. E.g., about 120 years ago, some wealthy families built some large, expensive houses in Newport, RI. Why? So, in the summers the wives could hold big parties, network, and have their daughters meet good (i.e., mostly just wealthy) husband candidates! Sometimes even the husbands would get on a private railroad car in Manhattan, ride up to Newport, and show up at the party for a few minutes before going upstairs to read a book or play poker with the other bored husbands! E.g., in careers, long a common remark has been, "It's not what you know but who you know.". Well, the Internet is the biggest network of them all; it can in effect reduce geographic distances to zero; it has the power of computers and software to process data, and it might become much more important than anything before in "who you know", etc.
Descending a little from 40,000 feet, and thinking about what software we might write, we can identify three important issues:
(1) Meaning.
An arc in a graph from the definition in applied math has essentially no meaning in any sense social or even practical. So, if we are to make use of data from social graphs, etc., then we should make some progress, if only rough, on what the arcs, or other data, mean.
(2) Purpose.
We should identify the purpose of the software. That is, what will be the output of the software, and why will users like that output? Or, what do users want, or at least would like if they saw it, that such software might provide? What the heck is the darned purpose?
We might start by articulating just what is the purpose of Facebook, Twitter, etc.! The original purpose of Facebook at Harvard was to get dates. For Twitter, maybe the purpose is to follow selected other people and, maybe, ingratiate oneself.
(3) Techniques.
Given the data, what data manipulation techniques will we have the software use to get the results good for the purposes?
I raise one more point:
The US has something over 300 million people. In some important respects, this number is not very large. E.g., it is easy enough for current computing and data base techniques to have, say, 1 million bytes on each person and still be able to store and process that data.
So, it can appear that there is a chance that we could have a single, grand solution in the space of social graphs and social search. If so, then we will guess that the present efforts in social media are only tangential or indirect solutions for a central problem not yet identified, articulated, or solved and that a single, grand solution might be possible.
So, consider roughly 1 million bytes on each person in the US. It turns out, although not discussed very openly, actually there are data bases that have a lot of data on nearly every adult in the US. So at least in principle, there is a lot of data now. Keeping this data restricted in walled gardens forever may be unreasonable.
Here is a simple example: Just take the printed phone books, type in the data (or get a DVD from someone), and sort the data on state, city, and street address. Then, given a person and their street address, a simple data base query will yield the names and telephone numbers of all of that person's neighbors. In particular, such a telephone book can provide a case of a social graph for each person in the book.
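As a tiny sketch of that example (illustrative field names only), the "neighbors" query is just a grouping key over the sorted listings:

#include <map>
#include <string>
#include <vector>

struct Listing { std::string name, phone, street, city, state; };

// Key: state|city|street, so everyone on the same street lands in one bucket.
using NeighborIndex = std::map<std::string, std::vector<Listing>>;

NeighborIndex buildIndex(const std::vector<Listing>& book) {
  NeighborIndex index;
  for (const auto& l : book)
    index[l.state + "|" + l.city + "|" + l.street].push_back(l);
  return index;
}

// index.at("CA|San Francisco|Main St") then lists that person's listed neighbors.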
When this person makes a telephone call, the telephone records provide another social graph for this person. When this person uses a credit card, we get another social graph. Each merchant who accepts a credit card defines a social graph.
Net, our society is awash in data for social graphs from sources much more general than Facebook, Twitter, etc.
So, generally we can guess that we can approach asymptotically 1 million bytes of data on each person in the US where this data provides a fairly complete description of that person.
So, we have potentially a grand answer to the issue what data.
Then we can move on to the question of purposes: What will people want to do with this data?
Okay, in some broad sense, many of the uses will be introductions. Or, "I see that you also bought a Golden Retriever puppy at the North Hills Mall Puppy Store and also have a son in the third grade at North Side Grade School." Uh, nearly anyone else could devise better purpose scenarios than that!
So, for introductions, there already is an industry, that is, romantic matchmaking. So, introduction can be for more purposes than just romantic, and likely both the data and the processing techniques used could be quite similar.
So, from 40,000 feet up, the purpose of social anything and of social search may be introductions that could be accomplished by starting with available data and processing it much as in current romantic matchmaking services.
Yes, the introductions could be for many purposes -- hobbies, careers, looking for a supplier, customer, employer, employee, date, spouse, cello in a string quartet, etc.
At times in the comments on the Fred Wilson thread, it appeared that one of the purposes was just the usual one of some women just wanting to meet with, get to know, and gossip with as many other people as possible, for no definite reason! That is, we have to accept that one of the purposes may be just to 'meet' people for no other definite purpose!
So, where am I going wrong?
What more is there to be said?
Where can we be more clear on the data, purposes, processing, and future of social whatever via the Internet?
I am looking to find a ZKP scheme for humans. Instead of remembering passwords, the user can remember a unique function that has a certain property. Services then probe the user to prove that they know such a function, without revealing the function to the service.
The problem is, I'm not good at cryptography. I need to find a set of functions such that each function:
- is easy to evaluate in my head
- is possible for me to remember
- can be pre-calculated
- is reasonably fast to use in a ZKP
One example of such a function is to imagine a 3D cellular shape with holes. The challenge type is a list of "discrete movements through the space", and the response is a list of "crossings of the boundary of the 3D shape".
I hope you hackers have a better idea of what types to use for `C`, `R`, and how to choose `generate_f`.
details in pseudo-code:
```idris2
-- this is public
-- the secret `f` generator
-- should discourage rainbow table
generate_f : (random_seed: Seed) -> P -> F
-- this is private
-- the user remembers a function `f` with certain property in set `P`
-- this function should be easy to remember and calculate for humans
F : Type
F = (challenge: C) -> (response: R)
f : F
-- this is public
-- prove that the function is generated with `property` from its response
-- this should be straightforward to implement using ZKP
verify : (property: P) -> (challenge: C) -> (response: R) -> bool
```
I get a lot of documents that are filed throughout the day. The documents are submitted and can be modified and resubmitted. Submitted documents are part of a lineage: I file A, then I file B; I then file an amended B (B.1) and another amendment (B.1.1); then I file C, where C ultimately references B.1.1 as the "final" prior step in the filing process. End of workflow. There may also be a file D, then E, then F or G; F may be amended to F2 and G to G2.
Those documents are filed by users and reference other users. Those users also file stuff and can resubmit, etc.
The question I have is:
What are the software design patterns or concepts I should be using for this type of problem? Is this best suited to a graph problem? The document dependency flow seems suited to an Airflow-type solution, but I'm not sure.
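To make the structure concrete, here's roughly how I picture the data, sketched in C++ with illustrative names (each document points at the filing it amends, if any, and at the filings it references, so the lineage forms a DAG):

#include <memory>
#include <string>
#include <vector>

struct Document {
  std::string id;                                      // e.g. "B.1.1"
  std::string filedBy;                                 // submitting user
  std::vector<std::string> referencedUsers;            // users mentioned in the filing
  std::shared_ptr<Document> amends;                    // B.1.1 -> B.1 -> B; null for originals
  std::vector<std::shared_ptr<Document>> references;   // e.g. C -> { B.1.1 }
};

// Walk an amendment chain back to the original filing.
std::shared_ptr<Document> original(std::shared_ptr<Document> doc) {
  while (doc && doc->amends) doc = doc->amends;
  return doc;
}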
Thank you for any guidance. I will happily engage and unpack the problem with anyone who has the time or interest to reply.
1. The issue search doesn't display snippets of comments
2. The global search https://github.com/search displays only a single snippet per issue
3. Because of 1., I have to open the found issues one by one. But before I can "CTRL+F" I must click "Load more..." several times to ensure I'm searching through all the comments. Even finding and clicking all the "Load more..." links is challenging since they are displayed in various places and sometimes there's more than one such link.
Just disabling the "Load more..." links (ensuring all comments are loaded) would be a big help for me. Alternatively, I could use a 3rd-party Github client with a better search.
Does anyone know of something that could help?
What I was initially thinking of doing was borrowing a little bit of money from my family. It would be borrowing a few hundred here or there to get by month-to-month. The problem is that if I borrow even a few hundred in that context and it allows me to succeed, it might not be entirely fair if I don't somehow distribute the rewards.
So then I started thinking about making a very simple spreadsheet for equity share. There are some serious problems here for me though. First one is, this is supposed to be a bootstrapped effort, not a huge venture-backed risk that is aiming to sell for millions of dollars. I don't know exactly what the equity would mean or when it would be distributed back to family who helped me. I also think that whole valuation idea is questionable actually.
So my idea is this: I will make a spreadsheet distributing some percentage of all money coming in. Maybe 10%. Anyone who invests gets their portion of that gross fee pie sent to them at least every month.
So if I have 300 people sign up and their monthly fees total $5000, any family who helped me would be splitting $250 or $500 each month with percentages of that dependent upon how much they gave me. They would not however have the right to direct the actions of my business in any way.
If I can get by without needing more money, then I don't take any more. If I have to, then I do, and that dilutes the original 5% or 10% of gross fees/income. (These rules are explained ahead of time, of course, to anyone who helps me.)
The advantage of this: I know exactly how much it's going to cost me, people know how much they can gain, and it reduces their risk. I retain control.
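To put numbers on it, here's a quick sketch of the payout rule (the contribution amounts below are just illustrative):

#include <map>
#include <string>

// Split a fixed slice of gross monthly revenue among helpers, pro rata to what
// each one contributed. Helpers get cash flow, not control.
std::map<std::string, double> payouts(const std::map<std::string, double>& contributions,
                                      double monthlyRevenue, double poolShare) {
  double total = 0.0;
  for (const auto& [name, amount] : contributions) total += amount;
  std::map<std::string, double> out;
  if (total == 0.0) return out;
  for (const auto& [name, amount] : contributions)
    out[name] = monthlyRevenue * poolShare * (amount / total);
  return out;
}

// With $5000/month in fees and a 10% pool ($500), helpers who lent $600 and
// $400 would receive $300 and $200 per month, respectively.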
Any comments or advice about these ideas? Thanks so much.
Anyways, if you've got a minute, please take a look (link at bottom), recommend any tools/tooling that are missing, and any good books that speak to these topics. Or offer any constructive criticism that occurs to you, as the post might be entirely too sparse on the details. Thanks!
http://kleevr.blogspot.com/2008/05/c-build-automation-using-svn-havent.html
* I just realized I may have been using 'Ask YC:', when I should've been using 'Ask HN:',... or perhaps 'Ask 1:' *
and just remember shift-8 makes things italic
void swap(int& a, int& b) {
int old_a = a;
a = b;
b = old_a;
}
Because the first assignment to 'a' destroys the old value of 'a', we have to store it in 'old_a' in order for the swap to work. In contrast, we could write it like this:
void swap(int& a, int& b) {
a' = b;
b' = a;
}
where a' means "the new value of a", but this syntax has no equivalent in real C++. This may seem like an insignificant problem at first, but when you scale up to something the size of a Game Boy simulation which has thousands of variables that need to change simultaneously it becomes a serious design problem.
GateBoy solves this problem by instrumenting every variable in debug builds to catch "old/new" bugs - variables get marked as "old" or "new" during execution and reading a "new" value when you expect an "old" value is a runtime error. Metron takes a different tack and does some symbolic code analysis at translation time to do essentially the same thing - it can ensure that for every possible path through the code, all reads of "old" values are actually reading old values and vice versa for "new" values.
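As a minimal sketch of the "old/new" instrumentation idea (my illustration, not GateBoy's actual code), a register can carry a flag saying whether it has been written this tick, and assert when a "new" value is read where an "old" one is expected:

#include <cassert>

// Register that knows whether it currently holds the "old" (pre-update) or
// "new" (post-update) value for this tick.
template <typename T>
struct Reg {
  T value{};
  bool is_new = false;

  T old_val() const { assert(!is_new); return value; }             // read pre-update state
  void set_new(T v) { assert(!is_new); value = v; is_new = true; } // write post-update state once
  void commit()     { assert(is_new); is_new = false; }            // end of tick
};

// The swap from above: read both old values before writing either new one.
void swap(Reg<int>& a, Reg<int>& b) {
  int new_a = b.old_val();
  int new_b = a.old_val();
  a.set_new(new_a);
  b.set_new(new_b);
  a.commit();
  b.commit();
}

// The buggy ordering `a.set_new(b.old_val()); b.set_new(a.old_val());` trips an
// assert at runtime, because the second read would see a's *new* value.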
If we generalize the problem a bit, we can say that the difficulty comes in trying to model a system where the entire state of the system, represented as X, needs to be transformed into a new state X' via a pure function F without constantly making copies of old state (kills performance), requiring the author to keep track of which parts of the state are old or new at any given point (causes bugs), or relying on hardware support like transactional memory (not widely available). To put it more concisely, a program in the form "x' = f(x)" has no good representation in the software programming languages we use today.
I recently had the opportunity to discuss this issue with a bunch of grizzled old software and hardware veterans, and the approximate consensus seems to be:
1. "x' = f(x)" as a model for global atomic state change makes sense to both software and hardware developers, with some viewing it as "just another name for a state machine" (mostly software devs) and some as "so obvious that it doesn't need to be stated" (mostly hardware devs).
2. There really isn't any software-oriented language out there that allows for global atomic state change to be both performant (the compiler understands the distinction between "old" and "new" and can reorder code to avoid excessive copies or temporaries) and unambiguous (the distinction between an "old" value and a "new" value has some explicit representation in the language syntax).
So, what do we call this model? Allowing the compiler to reorder statements to preserve "oldness" and "newness" during execution seems to diverge from the "a program is a sequence of operations" model of procedural programming, and the fact that we _do_ want to modify X in place instead of constantly creating new (potentially very large) state objects makes it a poor fit for functional programming.
So, my question to the Hacker News audience - Does it make sense to call "x' = f(x)" a programming paradigm? It's certainly not a new one, but it also doesn't fit well with the paradigms we've given names to. What should we do with it?
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3163673/?report=reader
Presenting paginated content in a multi-column layout has its advantages. First and foremost, it's better adapted to widescreen displays -- instead of a narrow column of text surrounded by huge margins on both sides, the entire window area is filled with text, allowing the reader to see much more content without scrolling. The user can maximize the browser window to fully utilize the screen area (the 2-column layout then becomes a 3-column layout).
One disadvantage is that you can no longer Ctrl+F to find a keyword within the article. The page has to provide its own search functionality. This is of course not an inherent problem and could be solved with proper browser support.
Do you think multi-column, paginated text has a future for general Web content (outside of ebook readers etc.)?
Now, we're starting to see a similar effect with cryptocurrency and blockchain technology. NFTs are a new technology that could potentially be applied to the problem of making music profitable again.
What if *FRACTIONAL OWNERSHIP* can be the solution?
You can read more about my 5-minute report in The Crypto Journal newsletter
https://thecryptojournal.substack.com/p/the-crypto-journal-issue-002
The last few weeks I've been trying to get a small boilerplate going for running fullstack applications almost for free using the free tiers of several services (CircleCI, AWS Lambda, etc.), to enable development that focuses only on the code. Honestly, it should have just been a matter of inserting a few secrets into the build tool (or committing them, private repos are free and who cares) and being up to speed, but that didn't really work out.
I realized that running non-Lambda languages is a problem. I usually would have gone with Zeit Now's Serverless Docker, but that seems to be a non-option, and the best way I'd do it if I were willing to spend money would be an AWS Fargate Terraform file; but not everyone can (or wants to) use that for simple prototyping.
So my question to you all is: How would you architect an infrastructure to enable quick prototyping for nearly no cost?
The "nearly no cost" part is important to me, as I know a lot of developers who can't even afford the $5 it costs, for example, to run an AWS EC2 instance, and who aren't even well versed in it.
Working locally is always nice, obviously, but there are waaaaay too many tutorials that speak of "production ready" and are "ssh into a VPS, git pull and docker run" at best.
Why do we apply a limited-resource currency (the US dollar, for example) to an unlimited-resource commodity such as music, games, ebooks (digital goods)? Instead, let's create a new currency that copies itself whenever someone digitally shares (copies) a good. This solves the imbalance between content creators, distributors, and consumers. Finally, a currency exchange or marketplace that converts between Copy Currency and commodity currency would solve the "what's my music/video game/etc. worth?" problem.
-----
Introducing Copy Currency (and its Positive Sum Economy)
Our current monetary system was founded on the idea of the exchange of limited commodities. In order to keep prices stable, the money supply must grow (and shrink) with the total value of commodities in circulation.
But digital goods are different because they are not limited--infinite copies can be produced at zero or near-zero cost. What if we created a currency that deals in digital goods--let it grow (and shrink) with the value of information in circulation? Could we reward artists as well as distributors fairly? What if we created an information currency in which transactions are not a zero-sum exchange?
For example, digital music is a "copyable good" and will serve for now as an example of a positive-sum transaction: Suppose person A writes, performs, and records a song. She then sells it to person B. When person B gets the music, he can sell it to person C. In selling from B to C, person A (the content creator) is also given an increase in her account. So for example let's say that A, B and C each start with $1 in their accounts:
A B C
1 1 1
($3 total in this economy)
Now, let's say B buys the music from A:
A B C
2 0 1
($3 total in this economy)
And finally, C buys the music from B:
A B C
3 1 0
($4 total in this economy)
Hey, where did the new money come from? It was "minted" digitally by the act of selling music and by participating in the distribution of that music. Person A was given an additional $1 at the same time that person B was given $1 from C.
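Here's a minimal sketch of that rule in code (Python, my own illustration; it only handles a simple sequential chain like the A/B/C example): the buyer's $1 goes to the seller, and everyone upstream of the seller in the good's distribution chain is minted an additional $1.

    balances = {"A": 1, "B": 1, "C": 1}
    chain = ["A"]                   # A created the good, so the chain starts with A

    def buy(buyer, seller):
        balances[buyer] -= 1
        balances[seller] += 1                     # the buyer's $1 changes hands
        for holder in chain[:chain.index(seller)]:
            balances[holder] += 1                 # minted for everyone upstream of the seller
        chain.append(buyer)

    buy("B", "A")
    print(balances, sum(balances.values()))   # {'A': 2, 'B': 0, 'C': 1} -> $3 total
    buy("C", "B")
    print(balances, sum(balances.values()))   # {'A': 3, 'B': 1, 'C': 0} -> $4 total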
What this system would promote is creativity and sharing. Being the first person to share something of value (to introduce it to the digital economy) is rewarded many times over by the social network. And intermediate steps in the social network--friends sharing with friends--are also rewarded. It turns the idea of piracy on its head--it becomes a feature of the system and the system can then properly represent artists and reward people in the middle for participating in the advertising & distribution of goods.
(Aside: We could also apply a global replication constant to each transaction (rather than a fixed amount being replicated through the transaction chain). For example, what if we had a 2/3 replication constant so that if person C buys the music from B for $1, then B gets $1 and A gets $0.66. This adds some complexity but provides a lever through which the global digital economy can be adjusted based on the creativity-to-sharing ratio.)
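One way to read this aside (my extrapolation -- the post only gives the first step): with replication constant r, the seller still receives the buyer's $1 and the k-th holder upstream of the seller is minted r^k dollars, which keeps the total minted per sale bounded by r/(1-r) no matter how long the chain gets.

    r = 2 / 3                      # the replication constant from the aside
    # e.g. E buys from D in a chain A -> B -> C -> D; upstream of the seller D, nearest first:
    upstream = ["C", "B", "A"]
    minted = {holder: r ** (k + 1) for k, holder in enumerate(upstream)}
    print(minted)                  # {'C': 0.666..., 'B': 0.444..., 'A': 0.296...}
    print(sum(minted.values()))    # ~1.41, always less than r / (1 - r) = 2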
So one problem is that people can potentially game the system if we allow cycles in the graph of giving. For example, if B is now allowed to purchase the same music from C, then everyone gets $1, including B! That can't be allowed, or else copy currency wouldn't give money a NEW meaning, it would have NO meaning. This constraint on the system can be achieved through technological means. In fact, it is quite possible that we could build copy currency from the open source BitCoin software base. The BitCoin system already includes transaction chains and a record of where money came from and whose hands it passed through (all anonymously). This could be leveraged as long as there is a way to implement the acyclic transaction rule.
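A rough sketch of how that acyclic rule could be enforced (illustrative Python, not actual Bitcoin code): each copy of a good carries its provenance chain, and a sale is rejected whenever the buyer already appears in that chain.

    def transfer(chain, seller, buyer):
        assert seller in chain, "only someone who holds the good can sell it"
        if buyer in chain:
            raise ValueError(f"cycle: {buyer} already appears in {chain}")
        return chain + [buyer]

    chain = ["A", "B", "C"]             # A created the good, sold to B, who sold to C
    chain = transfer(chain, "C", "D")   # fine: D is new to this chain
    transfer(chain, "C", "B")           # raises: B buying the same music back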
I think this system could operate in parallel to a commodity-based currency. For products or services that have limited supply, we would use regular money. For products that can be copied or shared without the use of limited commodity resources, we could use copy currency.
The introduction of a new digital good into this economy has the potential to impact the economy in one of several ways, depending on the network topology of the distribution of the good. For example, if the economy consists of just 10 people, a $1 good could result in an increase of $36 (a final total of $1+$2+...+$9 = $45, minus the $9 already in circulation). This scenario only happens if every buyer has at least $1 and the distribution topology is linear (A shares with B, B shares with C, C shares with D, and so on through J).
The following economy starts with $9 and ends with $45:
A B C D E F G H I J
0 1 1 1 1 1 1 1 1 1
1 0 1 1 1 1 1 1 1 1
2 1 0 1 1 1 1 1 1 1
3 2 1 0 1 1 1 1 1 1
4 3 2 1 0 1 1 1 1 1
5 4 3 2 1 0 1 1 1 1
6 5 4 3 2 1 0 1 1 1
7 6 5 4 3 2 1 0 1 1
8 7 6 5 4 3 2 1 0 1
9 8 7 6 5 4 3 2 1 0
($45 total in this economy)
On the other hand, if everyone goes directly to the source (A) then there is no net increase in the economy, only an increase for A:
The following economy starts with $9 and ends with $9:
A B C D E F G H I J
0 1 1 1 1 1 1 1 1 1
1 0 1 1 1 1 1 1 1 1
2 0 0 1 1 1 1 1 1 1
3 0 0 0 1 1 1 1 1 1
4 0 0 0 0 1 1 1 1 1
5 0 0 0 0 0 1 1 1 1
6 0 0 0 0 0 0 1 1 1
7 0 0 0 0 0 0 0 1 1
8 0 0 0 0 0 0 0 0 1
9 0 0 0 0 0 0 0 0 0
($9 total in this economy)
And finally, there is the in-between case, which is the most likely one in a natural sharing setting. Here, distribution happens in parallel: transactions are made simultaneously between those who do not yet have the digital good and those who do:
A B C D E F G H I J
0 1 1 1 1 1 1 1 1 1
3 0 1 1 1 0 1 1 1 0
7 1 0 1 0 2 0 1 0 1
9 2 1 0 0 2 0 0 1 2
($17 total in this economy)
In all cases, the person who created or introduced the copy good is rewarded $9--once for each person in the economy who benefited from the good. The difference is in how intermediate parties are rewarded: the money supply as a whole increases most when everyone colludes to participate in a linear distribution run. Working against this theoretical maximum, however, is the desire for each person to be "first" to share a good, because the closer to the root of the tree a person is, the more benefit there is for that person (similar to pyramid schemes). Therefore, optimal distribution--log(n) sharing (where n is the number of people in the economy)--pulls the increase in money supply down to (possibly) reasonable values. It would be interesting to simulate this and model the possible rates of inflation.
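Here's a rough toy simulation along those lines (Python; the per-path minting rule is my reading of the tables above, not a specification): each person keeps the provenance path of their copy back to the creator A, and every purchase moves $1 from buyer to seller and mints $1 for each holder upstream of the seller on that path.

    def simulate(purchases, people="ABCDEFGHIJ"):
        balances = {p: (0 if p == people[0] else 1) for p in people}
        paths = {people[0]: [people[0]]}          # the creator A starts with the good
        for buyer, seller in purchases:
            balances[buyer] -= 1
            balances[seller] += 1
            for upstream in paths[seller][:-1]:
                balances[upstream] += 1           # minted copy currency
            paths[buyer] = paths[seller] + [buyer]
        return balances

    people = "ABCDEFGHIJ"
    linear = [(people[i + 1], people[i]) for i in range(9)]   # A -> B -> C -> ... -> J
    hub = [(p, "A") for p in people[1:]]                      # everyone buys directly from A

    print(sum(simulate(linear).values()))   # 45, as in the first table
    print(sum(simulate(hub).values()))      # 9, as in the second table
    print(simulate(linear)["A"], simulate(hub)["A"])   # A ends with $9 either way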
Copy currency is a theoretical and technological solution to today's issue of the mercantile paradigm being erroneously applied to information economies. It is not a technological solution like Digital Rights Management (DRM), however: transactions are still voluntary and trust is maintained at the transaction level. People are much more likely to want to participate in a system that gives them power and freedom than one that tries to limit their choices. It also bridges the gap between "producer" and "consumer" by mathematically relating the roles and providing a system in which people can easily switch roles depending on the degree to which they are a gatekeeper within their social networks for particular information.
Limiting Account Creation Fraud
New copy currency accounts could be created in a variety of ways. The main requirement is to reduce fraud by limiting accounts to one per person. For example, account creation could require a unique cell phone number and a text-message-based confirmation.
Alternatively, we could try using a system similar to most of life on earth: it takes two account holders to create a new one. In addition, you can't marry family, up to second cousins--in other words, you can't collude with your close friends to create new accounts ad infinitum. These rules would be imposed in order to prevent people from accumulating a large number of accounts, and then taking advantage of the system by "sharing" with themselves. Lastly, copy chains would not reward ancestors of the account--so if accounts A and B create account C, A and B can never monetarily benefit from C, no matter how far removed from C in a transaction chain (i.e. A sells to X, X sells to Y, Y sells to Z, Z sells to C... A gets no reward for the last of these transactions since A is an ancestor of C).
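To make the last rule concrete, here's a small illustrative sketch (Python, my own names): newly created accounts record their two creating accounts as ancestors, and those ancestors are simply skipped whenever one of the new account's purchases would otherwise mint money for them.

    account_ancestors = {"C": {"A", "B"}}    # accounts A and B created account C

    def reward_recipients(upstream_of_seller, buyer):
        excluded = account_ancestors.get(buyer, set())
        return [holder for holder in upstream_of_seller if holder not in excluded]

    # Z sells to C; the provenance chain upstream of Z is A -> X -> Y
    print(reward_recipients(["A", "X", "Y"], "C"))   # ['X', 'Y'] -- A gets nothing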
Undeveloped Questions & Thoughts
Is it possible to convert copy currency to commodity currency and vice versa? (Perhaps the solution to this problem would become THE solution to pricing infinite goods in our modern commodity-based economy?)