All Posts
@PickleRick  End to end means from the moment the files are picked up to the point they hit the target database. My intention is not to read or parse the files, but to make sure that if, for example, 10 EDI files were consumed in a batch, all 10 of them make it to the target database. Part of it would be understanding how many failed, at which stage they failed, etc.
@gcusello  My intention is not to read or parse the files, but to make sure that if, for example, 10 EDI files were consumed in a batch, all 10 of them make it to the target database. Part of it would be understanding how many failed, at which stage they failed, etc.
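If each pipeline stage writes a log line carrying the batch and file identifiers, a rough SPL sketch of that reconciliation could look like the following. The index, sourcetype, and field names here are hypothetical placeholders, not anything confirmed from this environment:

index=edi_pipeline sourcetype=edi:stage_log
| stats dc(file_name) AS files_seen count(eval(status="failed")) AS failures by batch_id stage
| eventstats max(files_seen) AS batch_size by batch_id
| where files_seen < batch_size OR failures > 0

This would flag any batch/stage combination where files went missing or failed along the way.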
@gcusello Well, this ain't that easy. While the format itself is relatively simple, it is - to some extent - structured data. So it's problematic to parse it properly while preserving the relationships between entities (there might be, for example, several instances of a "name" or "address" field, each related to a different person within the same record). So it's more complicated than it looks. Sure, it can be ingested, but it's more complicated to parse it properly so as not to make a big mess of it. Also @pradeepiyer2024 - what do you mean by "monitor end to end"?
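For a concrete illustration with a made-up X12-style record (segment names and sample values invented for the example): a "name" segment such as N1 can repeat within a single transaction, and a naive search-time extraction collapses the repeats into multivalue fields, losing which address belongs to which party:

| makeresults
| eval _raw="N1*ST*SHIP TO CORP~N3*1 MAIN ST~N1*BT*BILL TO CORP~N3*2 OAK AVE~"
| rex max_match=0 "N1\*(?<party_qualifier>[^*~]+)\*(?<party_name>[^*~]+)"

Here party_name comes back as a multivalue field containing both SHIP TO CORP and BILL TO CORP, and the pairing with the N3 address segments is already gone unless you reassemble it yourself.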
Yes, but it makes no sense to add another layer of processing since you're going to go through every event anyway. So the best approach here would be: do your basic search | lookup enriching your data | filter out data not matching your criteria based on lookup values.
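A minimal sketch of that pipeline, reusing the lookup and field names mentioned elsewhere in this thread (the base search is a placeholder to adjust):

index=your_index sourcetype=your_sourcetype
| lookup malware_list.csv malware_signature OUTPUT classification
| where isnotnull(classification)

With match_type=WILDCARD(malware_signature) set on the lookup definition, the wildcarded signatures match during enrichment, and the where clause drops events the lookup didn't classify.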
https://docs.splunk.com/Documentation/AddonBuilder/4.2.0/UserGuide/ConfigureDataCollection#Add_a_data_input_using_a_REST_API
"Build the data collection for your add-on to gather data from a REST API. A REST data input uses JSON as a data type and supports basic authentication and API-based authentication. For advanced data collection, create a modular input by writing your own Python code."
So if your source returns XML... well, you're on your own here.
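One hedged workaround, independent of how the raw XML gets ingested: the spath command extracts fields from XML as well as JSON at search time, so you don't necessarily need JSON paths at all. A self-contained check (the XML sample is made up):

| makeresults
| eval _raw="<response><item><name>alpha</name><status>ok</status></item></response>"
| spath

Run with no arguments, spath should auto-extract fields such as response.item.name and response.item.status from the event.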
You're right, but it'll run every 15 minutes over a limited amount of data, so we can live with the performance hit.
@PickleRick I appreciate your reply. The add-on builder option is what I'll go with. But will the add-on option work with XML data, given the data type is XML and the Splunk documentation only discusses JSON format? If so, do I need to apply the same "JSON path formats"? If not, can you kindly provide the formats or a reference guide?
Hey, yes, I use inputlookup to filter the results down to the logs I want to see by malware_signature. After that I want to enrich the table with the classification field, but the lookup command won't catch the malware_signature values with the wildcards.
Hello, I checked your suggestion, but it did not solve my problem. There are about 200 hosts and about 3% are affected (on the syslog server everything works flawlessly). I have the same type of device logs which are not affected. For me, it looks like a random issue with the forwarding. Kind regards, Norbert
You need to use another search to (conditionally) set the result field to null and use the results tokens from the search. For example:

{
  "visualizations": {
    "viz_F0Z9jsFU": {
      "type": "splunk.singlevalue",
      "title": "Speed in km/h",
      "dataSources": {
        "primary": "ds_eZ6LtdAT"
      },
      "options": {
        "unit": "km/h"
      }
    },
    "viz_p4tcz9dd": {
      "type": "splunk.singlevalue",
      "dataSources": {
        "primary": "ds_RopRdHUj"
      },
      "title": "Speed in mph",
      "options": {
        "unit": "mph"
      },
      "description": "$km per hour : result.kmh$ * 0.621"
    }
  },
  "dataSources": {
    "ds_eZ6LtdAT": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval kmh=random()%100",
        "enableSmartSources": true
      },
      "name": "km per hour"
    },
    "ds_RopRdHUj": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval speed_in_mph = if($km per hour:result.kmh$ > 50, $km per hour:result.kmh$ * 0.621, null())\n| table speed_in_mph"
      },
      "name": "kmh to mph"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "width": 1440,
      "height": 960,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_F0Z9jsFU",
        "type": "block",
        "position": { "x": 10, "y": 10, "w": 250, "h": 250 }
      },
      {
        "item": "viz_p4tcz9dd",
        "type": "block",
        "position": { "x": 290, "y": 10, "w": 250, "h": 250 }
      }
    ],
    "globalInputs": [
      "input_global_trp"
    ]
  },
  "description": "",
  "title": "Evaluate tokens using search"
}

Adapted from documentation example here
Hi @pradeepiyer2024, probably Splunk Enterprise can help you:
does the tool you're using have a log to read? If yes, you can read it using Splunk.
did you enable file logging in your Windows system? If yes, you can read the file system log with Splunk.
your EDI files are in plain text, so you could read them with Splunk, even if I don't know how this could help.
Ciao. Giuseppe
Hi @beechnut, as you can read at https://www.splunk.com/en_us/blog/it/why-you-should-migrate-from-legacy-apps-to-it-essentials-work.html it isn't possible to download this app (I don't know why, because it was very useful!). See if the IT Essentials Work app (https://splunkbase.splunk.com/app/5403) can satisfy your requirements. Ciao. Giuseppe
You have three options here.
1. Create a completely external script that will pull data from your REST endpoint and write it to a file for ingestion by a file monitor input, or send it to a HEC input. That's probably the easiest (in the quick-and-dirty sense) but least maintainable one.
2. Create a modular input manually - https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/
3. Use Add-on Builder to create a modular input - https://docs.splunk.com/Documentation/AddonBuilder/4.2.0/UserGuide/ConfigureDataCollection
Hi @Josh1890, as @PickleRick also said, searching and enriching are different things: using my solution you search for the patterns contained in your lookup. If you need to add the classification, the only way is to use the lookup command. Ciao. Giuseppe
I would expect Splunk to keep some form of cache (you can't expect it to query DNS for every single incoming UDP packet; that would be silly). I wouldn't bet my money either that it doesn't have its own resolver independent from the OS (like Java does, for example). Having said that:
1. Identifying hosts by names is usually more error-prone than using IPs.
2. With syslog sources you often have transforms overwriting the host field with the value parsed from within the event (and that might affect your case as well).
3. It's not a good idea to receive syslogs directly on your indexers (or even forwarders). It's better to use an intermediate syslog daemon writing to files or sending to HEC (SC4S, or a properly configured "raw" syslog-ng or rsyslog).
4. As you're saying that you're sending syslogs to an "indexer cluster", I suspect you have some kind of LB in front of those indexers. That's usually not a good idea. Typical load balancers don't handle syslog traffic (especially UDP) well.
The Splunk App for Windows Infrastructure 2.0.4 is EOL and the app itself has been archived. I tried to download this app because I want to reuse some of the dashboards available in it, but I am unable to, and I get the message: "This app restricts downloads to a defined list of users. Your user profile was not found in the list of authorized users." Is there a way around this? How can I get hold of the Splunk App for Windows Infrastructure 2.0.4? Kind regards, Jos
This solution is for a classic dashboard. How do I do it in Dashboard Studio?
These are two different things. One thing is generating conditions using a subsearch; another thing is enriching your results with a lookup. One important thing though: generating conditions where the search term has a wildcard at the beginning makes no sense performance-wise. Splunk still has to read all events from the index and check them one by one; it cannot use indexed structures.
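To make the contrast concrete, assuming the malware_list.csv lookup from this thread and a hypothetical index name:

``` 1. Subsearch: inputlookup generates the search terms; with leading wildcards every event still gets scanned ```
index=endpoint_logs [ | inputlookup malware_list.csv | fields malware_signature ]

``` 2. Lookup: events are retrieved first, then enriched with classification ```
index=endpoint_logs
| lookup malware_list.csv malware_signature OUTPUT classification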
I am confused. If you want to enrich data with classification, why use inputlookup? Just create a lookup with match_type=WILDCARD(malware_signature) if you haven't. In your third search, I see that you have defined a lookup named malware_list.csv. If so, you must have missed MATCH_TYPE. (See Create a CSV lookup definition.) Then use the lookup command instead of inputlookup:

``` your search that returns malware_signature ```
| lookup malware_list.csv malware_signature
| where isnotnull(classification)