
Some methods I've commonly seen in Enterprise Duct Tape:

Screen scrape the other service and do data exchange via a Selenium script.

Directly interact with the other service's database.

CSV files and nightly batch jobs.
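The screen-scraping approach boils down to one thing: pulling field values out of another app's rendered HTML because no API exists. A minimal sketch of that core step, using only the stdlib `html.parser` on a static snippet (real duct-tape setups drive a live browser with Selenium, and the table layout here is made up):

```python
from html.parser import HTMLParser

class OrderScraper(HTMLParser):
    """Collects the text of every <td> cell (hypothetical page layout)."""
    def __init__(self):
        super().__init__()
        self._in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and data.strip():
            self.cells.append(data.strip())

# Pretend this came from the other service's UI -- with Selenium it
# would be driver.page_source after logging in and navigating.
page = "<table><tr><td>ORD-1</td><td>42.50</td></tr></table>"
scraper = OrderScraper()
scraper.feed(page)
print(scraper.cells)  # -> ['ORD-1', '42.50']
```

The fragility is obvious: any cosmetic change to the other app's UI silently breaks the "integration".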




Oh damn you just reminded me. They tried to bring in "Robotics" (e.g. Blue Prism) here. For in-house apps.

Apparently it was so hard to deal with the developers that instead of exposing an API they would automate clicking around on Internet Explorer browser windows.

Thankfully I haven't heard much about it lately.


I raise you a service which gets its configuration from a table on a Confluence page


It's funny to think about but the reality is that it's better than a lot of other options.

- The page has automatic history & merge conflict resolution

- There's built-in role-based security to control both visibility and actions (read-only vs. read-write)

- You can work on a draft and then "deploy" by publishing your changes

- You can respond to hooks/notifications when the page is updated

Considering that the alternative is either editing a text file on disk or teaching business users to use git, a Confluence page is not so bad.
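For the curious, reading such a page isn't even hard: Confluence's REST API serves the page body as XHTML (`GET /rest/api/content/{id}?expand=body.storage`), so a key/value table parses with the stdlib. A hedged sketch — base URL, page ID, auth, and the table layout are all placeholders:

```python
import json
from html.parser import HTMLParser
from urllib.request import Request, urlopen

def fetch_page_xhtml(base_url, page_id, token):
    """Fetch a page's storage-format XHTML via the Confluence REST API."""
    req = Request(
        f"{base_url}/rest/api/content/{page_id}?expand=body.storage",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["body"]["storage"]["value"]

class ConfigTableParser(HTMLParser):
    """Reads rows of a two-column table into a {key: value} dict."""
    def __init__(self):
        super().__init__()
        self.config, self._row, self._in_cell = {}, [], False

    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
        elif tag == "tr":
            if len(self._row) == 2:
                self.config[self._row[0]] = self._row[1]
            self._row = []

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

# xhtml = fetch_page_xhtml("https://wiki.example.com", "12345", token)
xhtml = "<table><tr><td>retry_limit</td><td>3</td></tr></table>"
parser = ConfigTableParser()
parser.feed(xhtml)
print(parser.config)  # -> {'retry_limit': '3'}
```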


As crazy as that may seem on the face of it, it’s actually kind of genius for merging the roles of those who need to configure / don’t get git, and those who need to develop against the configuration.


Confluence is at least canonically XHTML so this is better than a lot of data lake bullshit I've seen.


DAAC - Documentation As A Configuration


and I thought the screen scraping / Selenium solution was wack! Wow!


This is one of the niches where (S)FTP and batch processing is still alive and kicking.


Yeah, SFTP + CSV file is still the standard for enterprise software.

The problem is that these kinds of things have to be built to the lowest common denominator, which is usually the customer anyway. The customer in enterprise software is usually not a tech company; they typically have outdated IT policies and less skilled developers than a pure tech company would have. Even if the developers are capable of doing something like interacting with a queue, they also need to be supported by a technology organization which can deal with that type of interaction.

Sometimes you get lucky and someone in the past has pushed for that kind of modernization. Or your project really won't work without a more advanced interaction model and you have someone in the organization willing to go to bat for tech enhancement.

But otherwise the default is "Control-M job to consume/produce a CSV file from/onto an SFTP"
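The transform step in such a job is usually trivial; the SFTP legs are what the scheduler (Control-M or similar) wraps around it. A sketch of the nightly batch, with the transfer shown as comments (paramiko or the sftp CLI in practice) and a made-up file layout:

```python
import csv
import io

def process_batch(infile, outfile):
    """Copy rows, adding a computed 'total' column (qty * unit_price)."""
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(
        outfile, fieldnames=["sku", "qty", "unit_price", "total"]
    )
    writer.writeheader()
    for row in reader:
        row["total"] = f"{int(row['qty']) * float(row['unit_price']):.2f}"
        writer.writerow(row)

# The nightly job, roughly:
#   sftp.get("/inbound/orders.csv", local_in)             # pull
#   process_batch(open(local_in), open(local_out, "w"))   # transform
#   sftp.put(local_out, "/outbound/orders_enriched.csv")  # push
src = io.StringIO("sku,qty,unit_price\nA-1,3,9.99\n")
dst = io.StringIO()
process_batch(src, dst)
print(dst.getvalue())
```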


My experience is that the usual reason for RPC-over-SFTP is that it's the only thing corporate IT security does not control and thus cannot make inflexible. Adding another SOAP/JSON/whatever endpoint tends to be a multi-year project, while creating a new directory on a shared SFTP server is a way to implement the functionality in a few hours.


It's also quite common in fixed data-logging applications, such as exports from a BMS.


Directly interacting with another service's DB is an "Enterprise Integration Pattern" AFAIR? It can make sense in lots of cases.


Shared database is, in fact, a classic enterprise integration pattern, and much of classic relational database engineering assumes that multiple applications will share the same database; in the classic ideal, each would use it through a set of (read/write, as needed) application-specific views, so as to avoid exposing implementation details of the base tables and to permit each application, and the base database, to change with minimum disruption to any of the others.
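The views part of that ideal is easy to show concretely. A minimal sketch using sqlite3 (table and view names are made up): the base table stays private to its owning team, and each consuming application sees only a stable, purpose-built view.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Base table owned by the "orders" team.
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT,
        amount_cents INTEGER,
        internal_flags TEXT          -- implementation detail
    );

    -- Application-specific view for a hypothetical billing app:
    -- exposes a stable shape and hides internal_flags, so the base
    -- table can change without breaking the consumer.
    CREATE VIEW billing_orders AS
        SELECT id, customer, amount_cents / 100.0 AS amount
        FROM orders;
""")
db.execute("INSERT INTO orders VALUES (1, 'ACME', 4250, 'x')")
rows = db.execute("SELECT * FROM billing_orders").fetchall()
print(rows)  # -> [(1, 'ACME', 42.5)]
```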


Reading data from another application's database is pretty common (although even this can cause chaos if done without some care) but writing to application databases is often a very bad idea and often explicitly forbidden by CRM/ERP vendors.



