All Posts

Mmm, that's odd, because I use that technique to manipulate _time. If you could find a simple example of _raw data where that is the case - perhaps by limiting the search to pick up just one event of each type - I'd be really interested to see it. If the date format for the 2023 data does not match the strptime() format string, that would cause a problem, as _time would then reflect the later indexed time rather than the deployment time - that would be my suspicion. If you can do a simple search for that 2023 data and append this:

| eval orig_time=strftime(_time, "%F %T.%Q")
| eval _time=strptime(...)
| table _time orig_time

that may show the difference.
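For reference, a completed version of that diagnostic - a sketch only, assuming the timestamps live in Deployed_Data_time with a %m/%d/%Y %H:%M format, as in the sample data elsewhere in this thread:

| eval orig_time=strftime(_time, "%F %T.%Q")
| eval _time=strptime(Deployed_Data_time, "%m/%d/%Y %H:%M")
| table _time orig_time Deployed_Data_time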
Got it remediated by including gcusello's suggestion of | eval latestDeployed_version=Deployed_Data_time."|".version and using that field in the stats statement with max() instead of latest(). This worked well and validated fine. Thanks a lot to both.
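For anyone finding this later, the full pattern looks roughly like this (field names as above; converting the timestamp to epoch first is my addition, an extra safeguard so that max()'s lexicographic comparison matches chronological order, which a raw %m/%d/%Y string would not guarantee):

| eval latestDeployed_version=strptime(Deployed_Data_time, "%m/%d/%Y %H:%M")."|".version
| stats max(latestDeployed_version) AS latestDeployed_version BY app env
| eval version=mvindex(split(latestDeployed_version, "|"), 1)
| table app env version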
Hi, thanks - it works for the sample data I gave, but the actual data I pushed into Splunk is not ordered by the Deployed date timestamp: I pushed the old data (year 2023) recently and the new data (2024) first. Hence, for some columns the results reflect the time the data was pushed into Splunk rather than the deployment time. Can any workaround be applied?
Hi @theprophet01,

To get a summary of entities with their info tags you can run the excellent query by sandrosov_splun:

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text
| eval value=spath(value,"{}")
| mvexpand value
| eval info_fields=spath(value,"informational.fields{}"), alias_fields=spath(value,"identifier.fields{}"), entity_id=spath(value, "_key"), entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name")
| appendpipe [| mvexpand alias_fields | eval field_value = spath(value,alias_fields."{}"), field_type="alias" | rename alias_fields as field_name ]
| appendpipe [| where isnull(field_type) | mvexpand info_fields | eval field_value = spath(value,info_fields."{}"), field_type="info" | rename info_fields as field_name ]
| where isnotnull(field_type)
| table entity_id entity_name entity_title field_name field_value field_type

This will give you one row per entity field, showing its value and whether it is an alias or an info field.

To list the services, you can call the "getservice" custom command that comes with ITSI:

| getservice
| table title, serviceid, description, service_tags, kpis, service_depends_on, services_depending_on_me, enabled, base_service_template_id, entity_rules, *

Cheers,
Daniel
@NathanAsh You're right! Then use the strptime() example I mentioned and the latest() function. You don't seem to need the original _time, so just convert Deployed_Data_time to _time and you can use latest(version):

| makeresults format=csv data="Deployed_Data_time,env,app,version
4/16/2024 15:29,axe1,app1,v-228
4/16/2024 15:29,axe1,app1,v-228
9/15/2023 8:12,axe1,app1,v-131
9/15/2023 8:05,axe2,app1,v-120
9/12/2023 1:19,axe2,app1, v-128
4/16/2024 15:29,axe2,app2,v-628
4/16/2024 15:26,axe2,app2,v-626
9/15/2023 8:12,axe2,app2,v-531
9/15/2023 8:05,axe1,app2,v-530
9/12/2023 1:19,axe1,app2, v-528"
| eval _time=strptime(Deployed_Data_time, "%m/%d/%Y %H:%M")
| stats latest(version) AS version BY app env
| table app,version,env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"
You can go to Searches, Reports, and Alerts, then set the App to Splunk Security Essentials. If you set the Owner to All, you can see all of the searches included in the app. If one of them is scheduled, you can set its time range and schedule so that it onboards data from long ago in a single sweep. Did you do anything in the app interface to activate the "onboarding background search"?
There is a troubleshooting guide here: https://splunk.github.io/splunk-connect-for-syslog/main/troubleshooting/troubleshoot_resources/ The guide describes how to write the rawmsg to a file for both the working server and your non-working Windows machine, to see whether the messages are received identically. Once you confirm that the logs are being received the same way, you can move on to seeing why Splunk is not indexing them.
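Once both hosts are sending, a quick comparison on the Splunk side might look like this (a sketch only - the host values are placeholders for your working and non-working senders):

index=* host IN ("working-server", "windows-machine") earliest=-1h
| stats count latest(_time) AS last_seen BY host sourcetype
| convert ctime(last_seen)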
Are you sure it is supposed to go to the raw event collector at /services/collector/raw? Unless I am mistaken, you need the export_raw option to be enabled to export raw data for that endpoint. Try running it with the endpoint set to: https://<host>:8088/services/collector
We want to migrate clustered indexer data from the default location (/opt/splunk/var/lib/splunk) to custom warm/hot and cold locations. Example: /opt/warm_hot and /opt/cold. How can we achieve this goal? Thank you.
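In case it helps, a minimal indexes.conf sketch of the target layout, using the example paths from the question and assuming an index named main (the stanza would be pushed from the cluster manager to all peers; existing buckets must be moved to the new paths while Splunk is stopped):

# indexes.conf - volume definitions using the example paths above
[volume:warm_hot]
path = /opt/warm_hot

[volume:cold]
path = /opt/cold

# per-index paths; note thawedPath cannot reference a volume
[main]
homePath = volume:warm_hot/main/db
coldPath = volume:cold/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb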
There is always a chance of missing events in some circumstances. For example, if there is a huge lag due to a network outage or something similar and your events get indexed with several hours' delay, you won't find them when you're searching for recent events. But you can minimise the risk. The typical approach is to search every - let's say - 15 minutes over a slightly delayed window. For example, you search from 16 minutes ago to 1 minute ago, or 17 to 2, depending on your typical ingestion latency.
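As a concrete sketch, with a */15 cron schedule the search itself would carry the delayed window (the index name here is a placeholder):

index=app_errors earliest=-16m@m latest=-1m@m
| table _time host source _raw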
If I understand you correctly, you are exporting the results of a search, then importing them into another Splunk instance as new data? That would definitely alter the fields. Exporting search results is not intended as a method to move data unchanged from one Splunk instance to another. Are you trying to import the BOTS data, or to package indexed data in a manner similar to the BOTS data?
Would anything along these lines help achieve this? https://community.splunk.com/t5/Splunk-Search/How-to-convert-rows-to-columns/m-p/398009
@gcusello @PickleRick Thank you for the reply. We are sending data from an application console to Splunk through syslog, and it is configured to send only error logs from the console. So if I schedule the alert to run every 15 minutes over a 15-minute time range, will there be any chance of missing events? Our intention is to get an alert whenever there is a new event, without repeating the same event in a later alert.
I have one Splunk instance where I ran a search and exported the data as a CSV file, an XML file, and a raw file. The data is mostly Windows event logs: "process command line", "creator process", etc. I am trying to import this data into another Splunk instance. When the data is imported, I notice some fields are missing, like "process command line". I tried each file type and had no success. I also reviewed the exported data, and all of the fields and values are present in the files.

Essentially, I am trying to import data similar to Splunk BOTS: GitHub - splunk/botsv3: Splunk Boss of the SOC version 3 dataset.
Personal project
Why emulate ARM when Splunk doesn't support it?
@richgalloway I am running Lima VM with Rosetta. Is there a way to emulate amd64? Maybe there is a certain flag I can use?
Splunk Enterprise is not available for ARM processors.  FWIW, I run the standard Linux version of Splunk on my M2 Mac.
Hello @splunky_diamond, As stated by two folks already, resource consumption depends on multiple factors. If you are planning to enable ~15 use cases in ES for learning purposes in an all-in-one test environment, 32 GB RAM, 32 vCPU, and a 200 GB hard disk should be enough. The base configuration for ES is described here: https://docs.splunk.com/Documentation/ES/7.3.1/Install/DeploymentPlanning
It would be better to raise a Splunk Support case for more thorough troubleshooting.