> You seem to be overlooking the bald-faced lie told by Meta/IG that someone's new account is violating "Community Guidelines" before they can even use it.
I don't know about OP, but the article they linked had a screenshot showing that the Community Guidelines they'd violated were around "account integrity". Looking at those[1], it seems plausible that OP and the article's author used something during account creation that triggered an integrity system, similar to what the parent was describing. Maybe they used a proxy/VPN, or something else that caused the robots to think that they were "Creat(ing) an account by scripted or other inauthentic means."
I don't think that big tech deserves a free pass on much, but to think that they're suspending accounts just to harvest phone numbers seems like it would be something they'd get into deep shit over: stock price drop, huge fines, CEO in front of Congress-type of thing. I doubt it would be worth it to them.
> the article they linked had a screenshot showing that the Community Guidelines they'd violated were around "account integrity". Looking at those[1], it seems plausible that OP and the article's author used something during account creation that triggered an integrity system, similar to what the parent was describing. Maybe they used a proxy/VPN, or something else that caused the robots to think that they were "Creat(ing) an account by scripted or other inauthentic means."
Compare my HN username to the domain name of the linked article. I am the author.
I did not use a proxy or VPN.
> to think that they're suspending accounts just to harvest phone numbers seems like it would be something they'd get into deep shit over: stock price drop, huge fines, CEO in front of Congress-type of thing. I doubt it would be worth it to them.
For what it's worth, I got the same "account integrity" explanation. Until proven otherwise I'm assuming that's the same canned response they always give. I did not use a proxy or a VPN, and I did not use an anonymous email address like a protonmail account or something similar.
I bought some cheap IP cameras to monitor our cabin while we're not there, and every single time we arrive I unplug them. This makes me feel vindicated that I'm not just paranoid; the creeps really are one buggy firmware away from selling videos of your family on the Internet.
The oldest copper artifacts we have are from the old copper complex around Lake Superior, and more recent metalworking traditions were present throughout much of the Americas.
> Archaeological evidence has not revealed metal smelting or alloying of metals by pre-Columbian native peoples north of the Rio Grande; however, they did use native copper extensively.[32]
Which then further cites: George Rapp Jr, Guy Gibbon & Kenneth Ames (1998). Archaeology of Prehistoric Native America: an Encyclopedia. New York: Taylor & Francis. p. 26.
I do not own this book, so I'm afraid I cannot verify the citation.
I didn't realize how easy copper smelting was until I watched some of those primitive-technology YouTube channels where they make a water filtration system and smelt some copper using a handmade smelter. Linked below if anyone is interested.
It's not exactly easy: it requires good-quality coal, and fine-tuning of the airflow-to-coal ratio to get the furnace hot enough without it becoming oxidizing. Those guys describe having to try multiple times before working out a successful process to produce copper, documenting a couple of their experiments in videos. They've nicely documented their final configuration in this video, some high-quality experimental archaeology work: https://www.youtube.com/watch?v=KYaJuab5riE
Keep in mind that you have to be wary of poisonous fumes - hopefully the elements giving their copper alloy a brassy appearance don't include lead, or worse, arsenic.
Native copper working is considered metallurgy in archaeology. If you disagree, feel free to take it up with the archaeometallurgists.
Moreover, purification from ores was done. The so-called copper bells of the Southwest were mainly produced from ores rather than native copper and were often alloyed with arsenic or silver to modify the color. The main center of production was Paquimé (~180 mi west of the Rio Grande), but the cultural area extends well over the border into AZ, NM, and CO.
My understanding is that advanced stonework, shaped with metal, lasts. However, simple stone tools (think axes, hammers, flint arrowheads, etc.) tend to get lost easily. Finding them is made harder still by the difficulty (impossibility?) of detecting them with tools like metal detectors.
No, they're quite easy to find if you know what you're looking for and you look in the right places. Lithics in the ancient world were a bit like plastics today: ubiquitous and highly disposable.
For particular lithic industries/tools and certain situations you might get some amount of reworking, but ultimately people were producing new tools very frequently. That means that any area in which you might find them on the surface will usually have some, and a long-term production site will have overwhelming artifact density. They can occasionally look like normal rocks, but once you train your eye the worked faces become visually distinct.
I think most folks would have more fun with a metal detector and a map of Civil and Revolutionary War military camps and battlegrounds, or even just poking around ghost towns or old farmhouses, than looking for pre-Columbian artifacts up here north of the Rio. It's largely disappointing, and if you actually manage to sort-of prove that the broken, barely recognizable spear point you found came from some named group of people circa 800 CE, you'll inevitably find that we know almost nothing about them, and that what we do know makes them sound exactly like every other group around that time, since we just don't know much about any of them beyond trite generalities.
As a Model X owner who rarely engages Autopilot for this exact reason, I concur. The number of times the car has suddenly started braking for no good reason (clear weather and good road conditions, few cars around) is way too high.
Yes. For new readers, suggestion for this article: start with the last sentence. Then skim the article re AI. Then respond. So we don't have to keep saying/hearing "Why not both?". The author's take is: both!
I guess the author makes the distinction between deterministic vs. non-deterministic algorithms. As a layman in these matters I still don't fully understand how some people classify machine learning algorithms as non-deterministic...
...I started wondering about this when I was reading a few articles about the AlphaZero algorithm, which learned to play chess entirely from self-play, and wondered whether it would always play the same moves in response to a fixed set of opponent moves (assuming the opponent starts as white).
My guess was that it wouldn't always respond in exactly the same way if there's any MCTS-like step somewhere in there blended with the machine learning algorithm.
For a game like chess it would seem to make sense that the overall algorithm would still include an MCTS step (like AlphaGo did), but for an autonomous car it would seem crazy to any human to imagine that there would be any random search over a tree of possible interpretations of the input, for example.
Does anyone have any detailed knowledge about this? Would a non-deterministic algorithm ever be allowed in an autonomous car?
> As a layman in these matters I still don't fully understand how some people classify machine learning algorithms as non-deterministic...
Some algorithms start from random numbers for the model and converge towards a better model. After the model is generated, the input->output mapping is deterministic, but since the model generation is non-deterministic, the algorithm overall is considered non-deterministic.
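To make that concrete, here's a minimal sketch (plain NumPy, toy data of my own invention, nothing to do with AlphaZero or any real system): training starts from random weights, so two runs can end up with different models, but any one trained model maps the same input to the same output every time.

```python
import numpy as np

def train(X, y, seed, steps=25, lr=0.05):
    """Toy gradient-descent training of a linear model from a random starting point."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])            # random initialisation: the non-deterministic part
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient step
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # made-up training data
y = np.array([1.0, 2.0, 3.0])

w_a, w_b = train(X, y, seed=0), train(X, y, seed=1)
print(w_a, w_b)    # different seeds -> (slightly) different models
print(X @ w_a)     # but applying one fixed model...
print(X @ w_a)     # ...always gives exactly the same output
```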
That may be, but it seems that Akamai CDN is still giving them a data feed. From that press release:
"As a result of the pixel-free technology partnership, MediaMath's clients will gain access to more data for audience segmentation, retargeting, and optimization, with quick and easy activation."
I'm curious to know how people feel about offline (pre-transformed) vs. on-demand transformations. Are there any HN'ers out there that have worked on a site with a large set of images, and have an opinion on this? Adobe's Scene7 product works in an offline mode as far as I can tell, and seems to have captured a large segment of retail companies with product catalogs.
I worked for a social network, and our system let users upload a photo and then transformed it into a few fixed sizes derived from the original. We had both pre-transformed and on-demand.
We pre-transformed the sizes viewed most often, like the news feed photo (720x720), the large photo (1024x768), and the original (if the user's screen is detected as a big screen); those we have to resize ASAP. Other sizes, like thumbnails, we transform on demand using nginx resize filter plugins, with caching via Varnish and/or Traffic Server.
That system has been working well so far.
I would say on-demand transformation is a good idea, since you don't have to store resized images that are never viewed by any user, so you save storage. But the idea must be implemented well, very well, if you're going to serve millions of users.
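For what it's worth, here's a minimal sketch of the on-demand idea in Python/Pillow (the paths, sizes, and function name are my own invention, not the commenter's actual setup, which used nginx resize filters fronted by Varnish/Traffic Server): resize an image the first time a given size is requested, cache the result on disk, and serve the cached copy afterwards.

```python
import os
from PIL import Image

CACHE_DIR = "/var/cache/thumbs"   # hypothetical cache location, assumed writable

def get_resized(original_path, width, height):
    """Return a path to a cached resized copy, creating it on first request."""
    name = f"{os.path.basename(original_path)}.{width}x{height}.jpg"
    cached = os.path.join(CACHE_DIR, name)
    if not os.path.exists(cached):                 # first request for this size: do the work now
        img = Image.open(original_path)
        img.thumbnail((width, height))             # resize in place, preserving aspect ratio
        img.convert("RGB").save(cached, "JPEG", quality=85)
    return cached                                  # later requests are just a cache hit
```

The same shape scales up by swapping the disk cache for Varnish or a CDN in front, which is where the "implemented well" part starts to matter at millions of users.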
San Mateo, CA - Media Service Performance Engineering
Akamai (http://www.akamai.com) has a number of open positions around the globe, but we're specifically looking for talented engineers to join our Media Service Performance team. A brief description is below, but I've worked in the Service Performance group for a number of years and can try to answer any questions you may have about what we do. Reach out to me via my personal email address in my HN profile.
Overview:
Media Performance is the Akamai group with end-to-end responsibility for ensuring that our Media services are performing well. A well-performing service, in addition to being fast and available, also needs to be robust and well-operated. Media Performance team members need to have very strong communication skills to enable them to work across all areas of the company (especially engineering, operations, networking, and technical services).
Responsibilities:
* Collect and analyze data from a network serving 10s of millions of hits per second to discern trends and anomalies.
* Work in a distributed network / content delivery environment on Linux and Windows, applying advanced skills in network diagnostics and debugging tools, and the related network protocols and implementations, routing protocols, and application level protocols to measure, analyze, characterize and improve performance, robustness, availability and scalability of large distributed content delivery systems.
* Identify and implement new approaches to improving performance and reliability, including scoping, designing, and implementing software features for new and existing software systems, from kernel changes to distributed server applications.
* Prototype substantial system modifications to serve as proofs of concept for large system development initiatives.
* Work in and with teams across all technical areas in the Company, including engineering, customer care, and professional services, to enable innovative new solutions in both the live and test networks for complex issues that span multiple technologies and services, often to meet specific customer needs.
One thing that's interesting is that the Navigation Timing API available in modern browsers varies from being wildly incorrect to accurate, depending on the browser. In a quick test on my Mac, Firefox 13 showed an incorrect result (basically, the same issue as WPT and Gomez - almost immediate TTFB), while Chrome 20 correctly detected the 10s TTFB.
Agreed, it's a great alternative to the mean for performance data - Keynote even offers it as an optional aggregation function when viewing data. Heck, if you have your performance data in a MySQL database, it's as simple as 'select exp(avg(ln(myVal)))', and doesn't require you to install a UDF, as you would if you wanted to find the median.
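As a quick sanity check of that formula (plain Python, made-up numbers, not anyone's real measurements): exp(avg(ln(x))) really is the geometric mean, and it's far less sensitive to a single outlier than the arithmetic mean.

```python
import math

samples = [120, 130, 110, 125, 5000]   # hypothetical response times in ms, with one outlier
arithmetic = sum(samples) / len(samples)
geometric = math.exp(sum(math.log(v) for v in samples) / len(samples))
print(round(arithmetic))   # ~1097 ms, dragged way up by the outlier
print(round(geometric))    # ~255 ms, much closer to typical behaviour
```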
1. https://transparency.fb.com/policies/community-standards/acc...