All Posts

On-prem Splunk does not _need_ anything from the internet to work correctly. Some functionalities can be useful (like the aforementioned updates for apps), but they are not obligatory and are often done the other way around (manually downloading the app/software update from the Splunk site and uploading it directly to the server(s)). Of course, you can directly use external services in inputs, external lookups, actions and so on, but that's up to you.
I suppose you want to extract the host part from the filename in the source field. You didn't specify that in your transform, so it's matching against the raw event. You need SOURCE_KEY = MetaData:Source
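A minimal transforms.conf sketch of that fix, reusing the set_custom_host stanza and REGEX from the question below - only the SOURCE_KEY line is added:

[set_custom_host]
SOURCE_KEY = MetaData:Source
REGEX = /TUC-[^/]+/[^/\n]+/([^-\n]+(?:-[^-\n]+){0,3})-(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})-\d{2}-\d{2}-\d{4}\.log
FORMAT = host::$1
DEST_KEY = MetaData:Host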
Hello, please write or send me a link to documentation listing which internet endpoints (URLs, ports) Splunk SIEM needs access to in order to function properly, download updates and apps, and anything else required for its normal operation.
In regex101.com, I tested the REGEX below and it was working. I updated the props.conf and transforms.conf below on the deployment server and on 2 heavy forwarders as well, but it is not working.

props.conf
[nix:messages]
TRANSFORMS-set_host = set_custom_host

transforms.conf
[set_custom_host]
REGEX = /TUC-[^/]+/[^/\n]+/([^-\n]+(?:-[^-\n]+){0,3})-(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})-\d{2}-\d{2}-\d{4}\.log
FORMAT = host::$1
DEST_KEY = MetaData:Host

Sample source paths:
/TUC-RST50/OOB/TUC-RST50M01ZTDCGDG01-01U-01-55.66.77.888-20-03-2025.log
/TUC-SNK50/OOB/TUC-RST50N03ZTLEFCG02-20U-SRV02-44.55.66.777-21-03-2025.log
/TUC-TYB50/OOB/TUC-RST50S03ZTLEFDB0B-20U-SRV01-33.44.55.666-21-03-2025.log
/TUC-RST50/firewall/TUC-RST50M01ZTCOMDE0C-30U-EMSFW01-22.33.44.555-22-03-2025.log
/TUC-SNK50/OOB/TUC-RST50M01FTIFW-11.22.33.444-22-03-2025.log

The output below should end up in the host field:
TUC-RST50M01ZTDCGDG01-01U-01
TUC-RST50N03ZTLEFCG02-20U-SRV02
TUC-RST50S03ZTLEFDB0B-20U-SRV01
TUC-RST50M01ZTCOMDE0C-30U-EMSFW01
TUC-RST50M01FTIFW
Making this adjustment was just what I needed. I noticed that as I started playing with fields I could change the results, but I was focusing on the secondary query as opposed to the base query. Thank you all for the help and advice.
One caveat - it will obviously _not_ work if your IF receives parsed data (from a HF, not a UF). You can bend over backwards and re-parse already-parsed data, but it's not a great idea.
Generally, the base search should be a transforming search and it shouldn't be too big. But if it's a normal event search, you should explicitly list the fields you'll be using later (as @catdadof3 pointed out - with the fields or table command).
I was able to replicate your problem - looks like if you use a table or fields command with the fields you need underneath the index search, you can get results.

<search id="recycle">
  <query>
    index=o365_sharepoint AND (Operation=FileRecycled OR Operation=FolderRecycled OR Operation=FileVersionsAllDeleted)
    | fields UserId Whateverotherfields
  </query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
</search>
Does it work if you use any other command in the query? E.g. just "| stats count"

Also, what version of Splunk are you using, out of curiosity?
Hello, that's actually where I started this. I took a functioning panel with the full query and then ripped out the primary section for the base search. I also tried creating a new dashboard from scratch and got the same empty results. The only thing I can do so that something displays is to comment out all of

<query>
  | stats count as "Object Deletions" BY UserId
  | search "Object Deletions" &gt; 50
  | sort - "Object Deletions"
</query>

If I leave any part of that code in, it fails.
I think it's a configuration issue; I'll open a ticket. I ran the command on the dm and got no results. I also went back through the default indexes.conf file on the indexer and saw that it's still on version 9.2.0 and did not get updated to 9.4.0.
I copied your dashboard into my test instance and modified the base search to find events, and it worked.   As a test, could you try saving your full search as a dashboard panel for a new dashboard, then editing the source of that new dashboard to move the first half of the search into a base query?
Hello folks, I'm trying to use a base search within a dashboard but it consistently returns no results. However, when I click Open in Search the results appear as expected. Any of you fine people have any suggestions?

<dashboard version="1.1" theme="dark">
  <search id="recycle">
    <query>
      index=o365_sharepoint AND (Operation=FileRecycled OR Operation=FolderRecycled OR Operation=FileVersionsAllDeleted)
    </query>
    <earliest>-7d@h</earliest>
    <latest>now</latest>
  </search>
  <label>Test Dashboard</label>
  <row>
    <panel>
      <title>Abnormal File Deletion and Recycle Patterns</title>
      <table>
        <search base="recycle">
          <query>
            | stats count as "Object Deletions" BY UserId
            | search "Object Deletions" &gt; 50
            | sort - "Object Deletions"
          </query>
        </search>
        <option name="drilldown">cell</option>
      </table>
    </panel>
  </row>
</dashboard>
Logstash is an external tool which doesn't have direct native support for Splunk. You can however configure several different types of outputs which you can use to send data to Splunk - syslog, HTTP, or simply writing to files and picking up the data from those files with monitor inputs using Splunk's UF. But. As @gcusello pointed out - since Logstash is an external tool meant for something completely different than being used with Splunk, it prepares data its own way and generally its output is not compatible with what normal Splunk apps expect. So you might be better off - especially if you're receiving data over syslog - "branching" your event ingestion pipeline before Logstash, so that Logstash, if you're also using it for some other solution, receives its copy of the data and another copy is sent from before Logstash to your Splunk environment (for example, to a syslog receiver).
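For the file-based variant, here is a minimal sketch of the UF side, assuming Logstash writes its output as lines to /var/log/logstash/out.log (the path, sourcetype and index are placeholders, not from the original post):

inputs.conf
[monitor:///var/log/logstash/out.log]
# example values only - match sourcetype and index to your environment
sourcetype = logstash_json
index = main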
Thank you for the response. I have been testing this all day so far and, to be honest, nothing I do with the props and transforms is having any effect. I do know that the IF is sending logs, because if I add

/etc/system/local/inputs.conf
[WinEventLog]
_meta = GUIDe::123456 Project_ID::654321

the logs get tagged and indexed. This does not work for me because it will double-tag the logs, since the client is adding its proper project and guide tags and then the IF adds another project and guide tag. Nothing I do in the props and transforms seems to change the fact that the logs are not being tagged from the IF itself; the logs still get properly forwarded and indexed, since they were already tagged by the client UF. I really don't know why the transform is not working. Even if I do this, it still has no effect:

== props.conf ==
[default]
TRANSFORMS-setCustomMetadata=setThisHostMetadata

== transforms.conf ==
[setThisHostMetadata]
INGEST_EVAL = GUIDe:=COALESCE(GUIDe,"999-999-999"), ProjectID:=COALESCE(ProjectID,"MyForwarderLayer")
Unfortunately, your data is of the "ugly" kind - JSON content with additional non-JSON elements, so you cannot use native JSON parsing. There is an idea - https://ideas.splunk.com/ideas/EID-I-208 - in a "future prospect" state, so we can hope this behaviour will be changed and there will be a way to easily manipulate such data. But for now you have more or less three options for handling it:
1) Strip the non-JSON part so that what's left of the event is a single well-formed JSON structure (kind of what @gargantua suggested). Of course, this way you're bound to lose some data.
2) Do manual regex-based extractions. It's rarely a good idea to hack at structured data with regexes; it usually ends in tears sooner or later.
3) Use explicit SPL to parse the JSON part out to a field and then throw spath at that field so the JSON gets parsed. Unfortunately, this complicates your search and makes it much worse performance-wise, since you have to parse all events to find the matching ones.
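A minimal SPL sketch of option 3, assuming the JSON object sits somewhere inside _raw and is delimited by the outermost braces (the json_part field name is made up for illustration):

| rex field=_raw "(?<json_part>\{.*\})"
| spath input=json_part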
While hot/warm and cold storage can be on the same device, it still makes sense to use cold space. Most importantly, cold is the only storage tier that allows you to limit the data by bucket age. So with hot/warm and cold on the same physical storage, it's probably most reasonable to keep warm small and keep most data in cold, so its retention can be flexibly configured.
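A minimal indexes.conf sketch of that layout (the index name, paths and limits below are placeholders, not recommendations):

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# keep hot/warm small so buckets roll to cold quickly
homePath.maxDataSizeMB = 10240
# age-based retention: buckets are frozen (deleted by default) from cold after ~90 days
frozenTimePeriodInSecs = 7776000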
A Splunk server is not the same as the Splunk software running on it. You can limit connectivity on the Splunk server using iptables/firewalld/Windows Firewall...
@MBristow7 I can see this app is compatible with Splunk Enterprise, not Splunk Cloud, but the app has been archived.
The Splunk App for Salesforce, is it compatible with Splunk Cloud?