Huh. I’m kind of surprised that he’s using big-C Communism. I guess it could be intentional.
I’d kind of think that most people on the left would be wanting to talk about little-c communism.
The physics of a strategic atmospheric bomber hasn’t changed. The B52 is close to optimal in shape for the task.
I mean, the B-1 and the B-21 are also strategic bombers, came out later, are still in the US inventory, and they look pretty different. I don’t know if I’d agree with an argument that the natural convergence is towards the B-52.
I think that a better argument is that the B-52 still effectively fills a desired role better than other options in 2025, but I don’t know if I’d say that that encompasses all strategic bombing.
They’re pretty large. I’m thinking butter-drenched hominy.
EDIT: Now I want hominy with butter.
EDIT2: https://cookpad.com/us/recipes/16952451-hominy-simmered-in-butter
Well, he’d like Google+, I guess. I’m pretty happy with pseudonymity, though.
An existing field?
I mean, if not, I think that it’d be a pretty easy call to be one that nobody is in, preferably one from the future – now you have access to a bunch of unique and thus highly-valuable knowledge.
I’m American – I don’t know if you just want British takes – but I don’t think that there’s a massive functional difference between a (non-parole, don’t know if that’s the case here) life sentence and a death sentence. Both mean that, absent some kind of pardon, the person isn’t going to be interacting with society. Maybe in a situation where someone’s managed to be repeatedly dangerous in a prison with the highest levels of security, a death penalty means that they can’t manage to kill someone in prison. I think that either probably acts as about the strongest deterrent that you can get out of the justice system; I’m a little skeptical that someone’s going to say “I will do this crime if I’m only facing life, but not if I’m facing death”. That being said, I’ve no particular objection to the death penalty either, and I don’t agree with people who have a tremendous objection to it. I don’t think that it provides a great deal of extra utility over life-with-no-parole, though.
investigates
I’m assuming that he will eventually become eligible for parole, as it looks like he’s 18:
The 18-year-old refused to come into the courtroom as he was sentenced at Liverpool Crown Court, having been removed from the dock earlier due to disruptive behaviour – which included demands to see a paramedic and shouts of “I feel ill”.
https://en.wikipedia.org/wiki/Life_imprisonment_in_England_and_Wales
In England and Wales, life imprisonment is a sentence that lasts until the death of the prisoner, although in most cases the prisoner will be eligible for parole after a minimum term (“tariff”) set by the judge. In exceptional cases a judge may impose a “whole life order”, meaning that the offender is never considered for parole, although they may still be released on compassionate grounds at the discretion of the home secretary. Whole life orders are usually imposed for aggravated murder, and can only be imposed where the offender was at least 21 years old at the time of the offence being committed.
EDIT: Oh, yeah, I guess the title implied a possibility of parole with “…minimum 52 years”
Yeah, but that’s just a guess. Looking further, it sounds like it was indeed added by the printer, not on the artist’s end, but that the art they sent was just black-and-white, with instructions to the printer as to what white areas to shade:
https://thenib.com/color-archive/
Cartoonists don’t create these dots—they’re added during print production. Cartoonists started with pen and ink on paper. Starting in 1894, their drawings were shot as photographic negatives, which were then exposed under intense light onto photosensitized zinc metal plates.
To add tints, cartoonists (or often an assistant or colorist) roughly marked up their drawings, indicating which grays or colors should appear in which areas. Comics syndicates created a limited set of colors to choose from based on mixing tints at different percentages, and those numbers would be marked on the comic. Some cartoonists used color pencil or watercolors on their original or a copy as an additional guide.
In production, engravers took these zinc plates—a single black plate for weekdays, and four separate ones for Sunday color comics—and painted around each area that needed tone using a water-soluble material called gamboge. They applied an oily ink to a sheet in a frame somewhat resembling a silk screen called a Ben Day screen that was covered with tiny dots for the desired tint. The engraver then placed the screen over the areas on the zinc plate that needed tint applied and used a burnisher to rub down the Ben Day pattern. They then washed the gamboge off. They might have to do this dozens to hundreds of times for a Sunday strip.
It sounds like that analog process became a digital one sometime around the 1970s, but still had the “artist-annotated image” approach.
If you mean the dot pattern on the can, I believe that that’s halftoning, which would let a black-and-white newspaper print something other than pure black-and-white images. I’m not familiar with the process, but my guess is that Larson sent in something with a flat shade of gray, and the halftoning was then generated digitally on the print side for it and anything else that used shades of gray.
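Just to make the dot idea concrete, here’s a rough sketch of ordered-dither halftoning in Python – not the actual newspaper production process (real halftone screens use clustered dots at an angle), just an illustration of turning flat grays into printable black-and-white dots. The filenames are made up.

```python
# A minimal halftoning sketch: threshold each pixel against a tiled 4x4 Bayer
# matrix, so flat grays come out as patterns of pure black/white dots.
import numpy as np
from PIL import Image

BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0  # thresholds in 0..1

def halftone(path_in, path_out):
    gray = np.asarray(Image.open(path_in).convert("L")) / 255.0  # 0=black, 1=white
    h, w = gray.shape
    thresh = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]  # tile over image
    dots = gray > thresh                                          # True -> white
    Image.fromarray((dots * 255).astype(np.uint8)).save(path_out)

halftone("cartoon_gray.png", "cartoon_halftoned.png")  # hypothetical filenames
```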
Ditto, though the LoRA might explain it.
The base model, Pony Diffusion, doesn’t normally act like that in my experience. Maybe if you generated an enormous number of images and culled them.
The simplicity of the style might also help – less to change.
Maybe hold off on the homicide. I bet someone has done something like that.
kagis
https://andregarzia.com/2019/07/livecode-is-a-modern-day-hypercard.html
LiveCode is a modern day HyperCard and everyone who used HyperCard will feel at home at it.
LiveCode runs on macOS, Windows and Linux and can generate standalone binaries for all those platforms plus Android and iOS. You can get it from https://www.livecode.com/ or you can get a GPL version of it from https://www.livecode.org/.
The language looks reminiscent of my memories of HyperTalk.
ARM: Powering 95% of smartphones globally.
Physical presence matters - can’t build rockets remotely
ARM – a fabless design company – doesn’t seem like a very good example to be using if the argument is “manufacturing and design need to be physically colocated”.
Hypercard was free for a while.
Yeah, and I wrote some stuff in HyperTalk, but IIRC it turned into some sort of Hypercard-the-authoring-environment and Hypercard-the-player split, with the player being redistributable.
kagis
https://en.wikipedia.org/wiki/HyperCard
At the same time HyperCard 2.0 was being developed, a separate group within Apple developed and in 1991 released HyperCard IIGS, a version of HyperCard for the Apple IIGS system. Aimed mainly at the education market, HyperCard IIGS has roughly the same feature set as the 1.x versions of Macintosh HyperCard, while adding support for the color graphics abilities of the IIGS. Although stacks (HyperCard program documents) are not binary-compatible, a translator program (another HyperCard stack) allows them to be moved from one platform to the other.
Then, Apple decided that most of its application software packages, including HyperCard, would be the property of a wholly owned subsidiary called Claris. Many of the HyperCard developers chose to stay at Apple rather than move to Claris, causing the development team to be split. Claris attempted to create a business model where HyperCard could also generate revenues. At first the freely-distributed versions of HyperCard shipped with authoring disabled. Early versions of Claris HyperCard contain an Easter Egg: typing “magic” into the message box converts the player into a full HyperCard authoring environment.[15] When this trick became nearly universal, they wrote a new version, HyperCard Player, which Apple distributed with the Macintosh operating system, while Claris sold the full version commercially. Many users were upset that they had to pay to use software that had traditionally been supplied free and which many considered a basic part of the Mac.
Hmm. Sounds like the interaction was more-complicated than just that.
Today’s users have massive amounts of computer power at their disposal, thanks to sales of billions of desktop and laptop PCs, tablets and smartphones. They’re all programmable. Users should be able to do just enough programming to make them work the way they want. Is that too much to ask?
Smartphones – and to a lesser degree, tablets – kind of aren’t phenomenal programming platforms. Yeah, okay, they have the compute power, but most programming environments – and certainly the ones that I’d consider the best ones – are text-based, and in 2025, text entry on a touchscreen still just isn’t as good as with a physical keyboard. I’ll believe that there is room to considerably improve on existing text-entry mechanisms, though I’m skeptical that touchscreen-based text entry is ever going to be on par with keyboard-based text entry.
You can add a Bluetooth keyboard. And it’s not essential. But it is a real barrier. If I were going to author Android software, I do not believe that I’d do the authoring on an Android device.
When Dartmouth College launched the Basic language 50 years ago, it enabled ordinary users to write code. Millions did. But we’ve gone backwards since then, and most users now seem unable or unwilling to create so much as a simple macro
I don’t know about this “going backwards” stuff.
I can believe that a higher proportion of personal computer users in 1990 could program to at least some degree than could the proportion of, say, users of Web-browser-capable devices today.
But not everyone in 1990 had a personal computer, and I would venture to say that the group that did probably was not a representative sample of the population. I’d give decent odds that a lower proportion of the population as a whole could program in 1990 than today.
I do think that you could make an argument that the accessibility of a programming environment somewhat-declined for a while, but I don’t know about that decline being monotonic.
It was pretty common, for personal computers around 1980, to ship with some kind of BASIC programming environment. Boot up an Apple II, hit…I forget the key combination, but it’ll drop you straight into a ROM-based BASIC programming environment.
After that generation, things got somewhat weaker for a time.
DOS had batch files. I don’t recall whether QBasic was standard with the OS. checks It did for a period with MS-DOS, but was a subset of QuickBasic. I don’t believe that it was still included later on, in the Windows era.
The Mac did not ship with a (free) programming environment.
I think that that was probably about the low point.
GNU/Linux was a wild improvement over this situation.
And widespread Internet availability also helped, as it made it easier to distribute programming environments and tools.
Today, I think that both MacOS and Windows ship with somewhat-more sophisticated programming tools. I’m out of date on MacOS, but last I looked, it had access to the Unix stuff via brew, and probably has a set of MacOS-specific stuff out there that’s downloadable. Windows ships with Powershell, and the most-basic edition of Visual Studio can be downloaded gratis.
Okay, I’m going to be the Debbie Downer here – I don’t think that Ukraine’s going to be able to do that in the kind of timeframe that would matter, if the aim is ballistic missile defense rather than just a long-range antiaircraft missile. The US spent a lot of time and resources on ballistic missile defense to get where it was.
it could have defended the Trypillia Thermal Power Plant from the Russian attack
It sounds like that was a ballistic missile attack:
https://www.pravda.com.ua/eng/news/2024/04/12/7450897/
The Trypillia thermal power plant, which was completely destroyed by a Russian ballistic missile attack on 11 April
Andrii Hota, Chairman of the Supervisory Board of PJSC Centrenergo, Ukraine’s national energy company, believes that given the constant danger of Russian attacks, the rebuilding of the Trypillia Thermal Power Plant (TPP) in Kyiv Oblast without providing Ukraine with air defence systems is an “exercise in futility”.
The Western alternative that I’m aware of is the Aster 30 fired by the SAMP/T from Eurosam, and from what I was reading earlier in the war – and it’s possible that things might have changed, if Eurosam managed to figure out and address whatever the issue is – it sounded like Russian ballistic missiles were getting past those. Zelenskyy had some very unhappy comments about how some missile other than the Patriot wasn’t able to intercept ballistic missiles, and I didn’t see any other anti-ballistic-missile system that had been sent to Ukraine, so I’m pretty sure that at least at that point, the Asters weren’t stopping Russia’s ballistic missiles. And if it was subsequently resolved, I haven’t seen news about it…and I’d think that both Ukraine and Eurosam would very much like to publicly release that if they had; for Eurosam, it’d be an endorsement of their weapon’s capability, and for Ukraine, it would be a morale-booster. I have seen a lot of news about the Patriot.
Maybe they could reverse-engineer and clone the Patriot. I would think that that would create its own political issues and I don’t know how easy it’d be to manufacture in Ukraine under wartime conditions. I can imagine that if I were Ukraine and thought I could manage it and that it were critical to the war, I might go ahead and do it and work out issues with the US later. But even that may not be practical, given what I’ve seen as to estimates as to timeframe in the war.
I don’t know how viable it is to go do a new ABM system from scratch, including all the testing and research, especially under the constraints that they’re working with (time, resources, Russia probably placing any development on Ukrainian territory that it can find as a top-priority target). Turkey – which has its own share of headaches, though far less than Ukraine is dealing with – spent years trying to build a similar ABM system, and it doesn’t sound like they were successful. And even Lockheed Martin, which has an existing product and production lines and had started to expand capacity some time back, and doesn’t have any of the headaches that Ukraine does, is still going to take years to scale up.
While the Army has yet to fund another missile production increase, Lockheed decided in the latter part of 2022 that it would continue to invest internally to be able to build 650 a year. “Lockheed could see the demand out there,” Davidson said, adding that the company plans to hit that number in 2027.
Honestly, if I were running things in Kyiv, I think that if I didn’t have a way to do active defense against ballistic missiles, I’d probably try to passively-harden a site enough against ballistic missile attack to survive it.
https://en.wikipedia.org/wiki/Trypilska_thermal_power_plant
The main assets of the Trypilska TPP were four pulverized coal and two diesel fuel units with a capacity of 300 MW each. There were also six turbines and generators with a total nominal capacity of 1,800 MW. The transformers are of the TDC-400000/330 type.
So, okay. It won’t be as efficient, and that’s going to be a hassle after the war. But maybe it’s possible to grab a field and start sticking 1MW generators in revetments sufficiently spaced that one missile can’t hit more than one of them. A ballistic missile is going to cost more than one of those generators does. Same thing for smaller, dispersed storage tanks, transformers, etc.
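For a rough sense of scale – the 1,800 MW figure is from the Wikipedia quote above, and the 100 m spacing between revetments is just a number I’m making up for illustration – the field doesn’t come out absurdly large:

```python
# Back-of-the-envelope: how big a field do you need to disperse ~1,800 MW of
# 1 MW generators with ~100 m between revetments? (Spacing is an assumption.)
total_mw = 1800      # Trypilska's nominal capacity, per the quote above
unit_mw = 1
spacing_m = 100      # assumed "one missile can't hit two revetments" distance

units = total_mw // unit_mw              # 1,800 generators
side = int(units ** 0.5) + 1             # ~43 x 43 grid
side_km = side * spacing_m / 1000        # ~4.3 km on a side
print(f"{units} units on a {side}x{side} grid, ~{side_km:.1f} km per side, "
      f"~{side_km ** 2:.0f} km^2")
```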
That won’t make it immune to ballistic missiles. But it will provide passive resiliency, and that might be enough to kill off the utility of the ballistic missiles.
It’s not possible to do that for all industry, while ABM interceptors can cover more than just one industrial installation. If Russia can’t effectively use ballistic missiles against power generation, they’re going to use them against the next-most-important thing on their priority list. But if one considers that power infrastructure is probably about the most-critical, that might be sufficient.
Yeah, I agree that the “this particular setting is performance-intensive” thing is helpful. But one issue that developers hit is that when future hardware enters the picture, it’s really hard to know what exactly the impact is going to be, because you have to also kind of predict where hardware development is going to go, and you can get that pretty wrong easily.
Like, one thing that’s common to do with performance-critical software like games is to profile cache use, right? Like, you try and figure out where the game is generating cache misses, and then work with chunks of data that keep the working set small enough that you can stay in cache where possible.
I’ve got one of those X3D Ryzen processors where they jacked on-die cache way, way up, to 128MB. I think I remember reading that AMD decided that on the net, the clock tradeoff entailed by that wasn’t worth it, and was intending to cut the cache size on the next generation. So a particular task that blows out the cache above a certain data set size – when you move that slider up – might have horrendous performance impact on one processor and little impact on another with a huge cache…and I’m not sure that a developer would have been able to reasonably predict that cache sizes would rise so much and then maybe drop.
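If you want to see that effect on your own machine, here’s a crude sketch – just numpy doing random gathers over arrays of increasing size. The numbers will be noisy and interpreter overhead blunts it, but the per-element cost generally steps up once the working set no longer fits in the last-level cache:

```python
# Crude working-set sketch: random gathers over arrays of growing size.
# Per-element cost tends to jump once the array falls out of the LLC.
import time
import numpy as np

for mib in (1, 8, 32, 128, 512):
    n = (mib * 1024 * 1024) // 8          # float64 elements for this many MiB
    data = np.random.rand(n)
    idx = np.random.permutation(n)        # random order -> cache-unfriendly
    start = time.perf_counter()
    data[idx].sum()                       # gather every element once
    elapsed = time.perf_counter() - start
    print(f"{mib:4d} MiB working set: {elapsed / n * 1e9:6.2f} ns/element")
```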
I remember – this is a long time ago now – when one thing that video card vendors did was to disable antialiased line rendering acceleration on “gaming” cards. Most people using 3D cards to do 3D modeling really wanted antialiased lines, because they spent a lot of time looking at wireframes and wanted them to look nice. They were using the hardware for real work, and were less price-sensitive. Video card vendors decided to try to differentiate the product so that they could use price discrimination. Okay, so imagine that you’re a game developer and you say that antialiased lines – which I think most developers would just assume would become faster and faster – don’t have a large performance impact…and then the hardware vendors start disabling the feature on gaming cards, so suddenly newer gaming cards may render them more slowly than earlier cards. Now your guidance is wrong.
Another example: Right now, there are a lot of people who are a lot less price-sensitive than most gamers wanting to use cards for parallel compute to run neural nets for AI. What those people care a lot about is having a lot of on-card memory, because that increases the model size that they can run, which can hugely improve the model’s capabilities. I would guess that we may see video card vendors try to repeat the same sort of product differentiation, assuming that they can manage to collude to do so, so that they can charge people who want to run those neural nets more money. They might tamp down on how much VRAM they stick on new GPUs aimed at gaming, so that it’s not possible to use cheap hardware to compete with their expensive compute cards. If you’re a developer assuming that using, say, 2x to 3x the VRAM that current hardware has will be reasonable for your game N years down the line, that…might not be a realistic assumption.
I don’t think that antialiasing mechanisms are transparent to developers – I’ve never written code that uses hardware antialiasing myself, so I could be wrong – but let’s imagine that they are for the sake of discussion. Early antialiasing worked by doing what’s today called FSAA. That’s simple and, for most things – aside from pinpoint bright spots – very good quality, but it gets expensive quickly. Let’s say that there was just some API call in OpenGL that let you get a list of available antialiasing options (“2xFSAA”, “4xFSAA”, etc). Exposing that to the user and saying “this is expensive” would have been very reasonable for a developer – FSAA was very expensive if you were bounded on nearly any kind of graphics rendering, since it did quadratically-increasing amounts of what the GPU was already doing. But then subsequent antialiasing mechanisms were a lot cheaper. In 2000, I didn’t think of future antialiasing algorithm improvements – I just thought of antialiasing as rendering something at a higher resolution and then scaling it down, i.e. doing FSAA. I’d guess that many developers wouldn’t either.
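To put rough numbers on why that kind of supersampling blows up so fast (the 1600x1200 target is just an arbitrary example, not anything from the era specifically):

```python
# NxN supersampling renders N*N times as many pixels as the target resolution,
# so fill and shading work scale quadratically with the supersample factor.
width, height = 1600, 1200  # arbitrary example target resolution
for n in (1, 2, 3, 4):
    pixels = (width * n) * (height * n)
    print(f"{n}x{n} supersampling: {pixels:>12,} pixels ({n * n}x the base work)")
```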
The base game’s campaign was meh, kinda repetitive. The expansions – and player-made adventures – improved on it a lot.
So, I’ve seen this phenomenon discussed before, though I don’t think it was from the Crysis guys. They’ve got a legit point, and I don’t think that this article does a very clear job of describing the problem.
Basically, the problem is this: as a developer, you want to make your game able to take advantage of computing advances over the next N years other than just running faster. Okay, that’s legit, right? You want people to be able to jack up the draw distance, use higher-res textures further out, whatever. You’re trying to make life good for the players. You know what the game can do on current hardware, but you don’t want to restrict players to just that, so you let the sliders enable those draw distances or shadow resolutions that current hardware can’t reasonably handle.
The problem is that the UI doesn’t typically indicate this in very helpful ways. What happens is that a lot of players who have just gotten themselves a fancy gaming machine, immediately upon getting a game, go to the settings, and turn them all up to maximum so that they can take advantage of their new hardware. If the game doesn’t run smoothly at those settings, then they complain that the game is badly-written. “I got a top of the line Geforce RTX 4090, and it still can’t run Game X at a reasonable framerate. Don’t the developers know how to do game development?”
To some extent, developers have tried to deal with this by using terms that sound unreasonable, like “Extreme” or “Insane” instead of “High” to help to hint to players that they shouldn’t be expecting to just go run at those settings on current hardware. I am not sure that they have succeeded.
I think that this is really a UI problem. That is, the idea should be to clearly communicate to the user that some settings are really intended for future computers. Maybe “Future computers”, or “Try this in the year 2028” or something. I suppose that games could just hide some settings and push an update down the line that unlocks them, though I think that that’s a little obnoxious and I would rather not have that happen on games that I buy – and if a game company goes under, those settings might never get unlocked. Maybe if games consistently had some kind of really reliable auto-profiling mechanism that could go run various “stress test” scenes with a variety of settings to find reasonable settings for given hardware, players wouldn’t head straight for all-maximum settings. That requires that pretty much all games do a good job of implementing it, though, or I expect that players won’t trust the feature to take advantage of their hardware. And if mods enter the picture, then it’s hard for developers to create a reliable stress-test scene to render, since they don’t know what mods will do.
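To be concrete about the auto-profiling idea, even something as dumb as the sketch below – run a benchmark scene at each preset, highest first, and take the first one that stays inside a frame-time budget – would go a long way if players trusted it. The measurement function is a hypothetical stand-in for whatever stress-test scene the engine would actually render.

```python
# Toy auto-profiler: pick the highest preset whose stress-test frame time
# fits the budget. measure_ms(preset) is a hypothetical benchmark hook.
FRAME_BUDGET_MS = 16.7  # ~60 fps
PRESETS = ["insane", "ultra", "high", "medium", "low"]  # highest to lowest

def pick_preset(measure_ms):
    for preset in PRESETS:
        if measure_ms(preset) <= FRAME_BUDGET_MS:
            return preset
    return PRESETS[-1]  # nothing fits; fall back to the lowest preset

# Example run with faked measurements standing in for a real benchmark scene:
fake_ms = {"insane": 45.0, "ultra": 28.0, "high": 15.2, "medium": 9.8, "low": 6.1}
print(pick_preset(lambda p: fake_ms[p]))  # -> "high"
```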
Console games tend to solve the problem by just taking the controls out of the player’s hands. The developers decide where the quality controls are, since players have – mostly – one set of hardware, and then you don’t get to touch them. The issue is really on the PC, where the question is “should the player be permitted to push the levers past what current hardware can reasonably do?”
I don’t have a problem with a model where I pay more money and get more content. And I do think that there are certain things that can only really be done with live service that some people will really enjoy – I don’t think that live service shouldn’t exist. But I generally prefer the DLC model to the live service model.
Live service games probably won’t be playable after some point. That sucks if you get invested in them…and live service games do aim at people who are really invested in playing them.
I have increasingly shifted away from multiplayer games over the years. Yeah, there are neat things you can do with multiplayer games. Humans make for a sophisticated alternative to AI. But they bring a lot of baggage. Humans mean griefing. Humans mean needing to have their own incentives taken care of – like, they want to win a certain percentage of the time, aren’t just there to amuse other humans. Most real-time multiplayer games aren’t pausable, which especially is a pain for people with kids, who may need to deal with random-kid-induced-emergencies at unexpected times. Humans optimize to win in competitive games, and what they do to win might not be fun for other players. Humans may not want to stay in character (“xXxPussySlayer69xXx”), which isn’t fantastic for immersion – and even in roleplay-enforced environments, that places load on other players. Multiplayer games generally require always-online Internet connectivity, and service disruption – even an increase in latency, for real-time games – can be really irritating. Humans cheat, and in a multiplayer game, cheating can impact the experience of other players, so that either means dealing with cheating or with anti-cheat stuff that creates its own host of irritations (especially on Linux, as it’s often low-level and one of the major remaining sources of compatibility issues).
If there are server problems, you can’t play.
My one foray into a live service game was Fallout 76; Fallout 5 wasn’t coming out any time soon, and it was the closest thing that was going to be an option. One major drawback for me was that the requirements of making grindable (i.e. inexpensive to develop relative to the amount of playtime) multiplayer gameplay were also immersion-breaking – instead of running around in a world where I can lose myself, I’m being notified that some random player has initiated an event, which kind of breaks the suspension of disbelief. It also places constraints on the plot. In prior entries in the Fallout series, you could significantly change the world, and doing so was a signature of the series. In Fallout 76, you’ve got a shared world, so that’s pretty hard to do, other than in some limited, instanced ways. Not an issue for every type of game out there, but it was annoying for that game. Elite: Dangerous has an offline mode that pretends to be faux-online – again, the game design constraints from being multiplayer kind of limit my immersion.
They do provide a way to do DRM – if part of the game that you need to play lives on the publisher’s servers, then absent reimplementing it, pirates can’t play it. And I get that that’s appealing for a publisher. But it just comes with a mess of disadvantages.
It really drives me nuts that I don’t know a name for that particular type of “unperturbed woodcut face in the sun” style.
The “face in sun” thing is used in heraldry, so “sun in splendour” maybe at least kind of encompasses it, but that doesn’t entail the woodcut appearance.
Maybe there is no term, but it seems so distinctive and such a common element in art that I’d think that someone would have named it.
Chicago O’Hare has eight runways and still doesn’t have to deal with the annual passenger load that Heathrow does.