All Posts


@NathanAsh You're right! Then use the strptime() example I mentioned and the latest() function. You don't seem to need _time, so just convert Deployed_Data_time to _time and you can use latest(version):

| makeresults format=csv data="Deployed_Data_time,env,app,version
4/16/2024 15:29,axe1,app1,v-228
4/16/2024 15:29,axe1,app1,v-228
9/15/2023 8:12,axe1,app1,v-131
9/15/2023 8:05,axe2,app1,v-120
9/12/2023 1:19,axe2,app1,v-128
4/16/2024 15:29,axe2,app2,v-628
4/16/2024 15:26,axe2,app2,v-626
9/15/2023 8:12,axe2,app2,v-531
9/15/2023 8:05,axe1,app2,v-530
9/12/2023 1:19,axe1,app2,v-528"
| eval _time=strptime(Deployed_Data_time, "%m/%d/%Y %H:%M")
| stats latest(version) AS version BY app env
| table app, version, env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"
You can go to Searches, Reports, and Alerts, then set the App to Splunk Security Essentials. If you set the Owner to All, you can see all of the searches included in the app. If one of them is scheduled, you can adjust its time range and schedule so that it onboards data from long ago in a single pass. Did you do anything in the app interface to activate the "onboarding background search"?
There is a troubleshooting guide here: https://splunk.github.io/splunk-connect-for-syslog/main/troubleshooting/troubleshoot_resources/ The guide describes how to write the rawmsg to a file for both the working server and your non-working Windows machine, to see whether the messages are received the same way. Once you confirm that the logs are being received identically, you can move on to investigating why Splunk is not indexing them.
Are you sure it is supposed to go to the raw event collector at /services/collector/raw? Unless I am mistaken, you need the export_raw option to be enabled to export raw data to that endpoint. Try running it with the endpoint set to: https://<host>:8088/services/collector
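For a quick sanity check, you can also hit that endpoint directly with curl (substitute your own host and HEC token; the event payload here is just an example):

curl -k https://<host>:8088/services/collector \
  -H "Authorization: Splunk <your-HEC-token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'

If that returns {"text":"Success","code":0}, the token and endpoint are fine and the problem is on the sending side.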
We want to migrate our clustered indexers' data from the default location (/opt/splunk/var/lib/splunk) to custom locations for warm/hot and cold. Example: /opt/warm_hot and /opt/cold. How can we achieve this goal? Thank you
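For reference, a minimal sketch of what the per-index path settings in indexes.conf would look like with such custom locations (the index name my_index is only an example; the actual migration procedure for a cluster is a separate matter):

[my_index]
# hot and warm buckets on the custom warm/hot volume
homePath = /opt/warm_hot/my_index/db
# cold buckets on the separate cold volume
coldPath = /opt/cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb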
There is always a chance of missing an event in some circumstances. For example, if there is a huge lag due to a network outage or something similar and your events get indexed with a several-hour delay, you won't find them when you're searching for recent events. But you can minimise the risk. The typical approach is to search every - let's say - 15 minutes over a "slightly delayed" window. For example, you search from 16 minutes ago to 1 minute ago. Or 17-2, depending on your typical ingestion latency.
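A sketch of what that looks like as the scheduled search's time range (snapping to the minute so consecutive runs neither overlap nor leave gaps):

earliest=-16m@m latest=-1m@m

With a cron schedule of */15 * * * *, each run then covers exactly the 15-minute slice ending one minute before the run.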
If I understand you correctly, you are exporting the results of a search, then importing them into another Splunk instance as new data? That would definitely alter the fields. Exporting search results is not intended as a method to move data unchanged from one Splunk instance to another. Are you trying to import BOTS data, or to package indexed data in a manner similar to the BOTS data?
Perhaps something along these lines would be helpful for achieving this: https://community.splunk.com/t5/Splunk-Search/How-to-convert-rows-to-columns/m-p/398009
@gcusello @PickleRick Thank you for the reply. We are sending data from the application console to Splunk through syslog, and it is configured to send only error logs from the console. If I schedule the alert to run every 15 minutes over a 15-minute time range, is there any chance of events being missed? Our intention is to get an alert whenever there is a new event, without repeating the same event in a later alert.
I have one Splunk instance where I ran a search and exported the data as a CSV file, an XML file, and a raw file. The data is mostly Windows event logs: "process command line", "creator process", etc. I am trying to import this data into another Splunk instance. When the data is imported, I notice some fields are missing, like "process command line". I tried each file type with no success. I also reviewed the exported data, and all of the fields and values are present. Essentially, I am trying to import data similar to Splunk BOTS: GitHub - splunk/botsv3: Splunk Boss of the SOC version 3 dataset.
Personal project
Why emulate ARM when Splunk doesn't support it?
@richgalloway I am running a Lima VM with Rosetta. Is there a way to emulate amd64? Maybe there is a certain flag I can use?
Splunk Enterprise is not available for ARM processors.  FWIW, I run the standard Linux version of Splunk on my M2 Mac.
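One way that can work (a sketch of the general approach, not something confirmed above) is to force the x86_64 container image under emulation, for example with Docker Desktop's Rosetta support:

docker run --platform linux/amd64 \
  -e SPLUNK_START_ARGS='--accept-license' \
  -e SPLUNK_PASSWORD='changeme123' \
  -p 8000:8000 splunk/splunk:9.1.3

The --platform flag pulls the amd64 image even on an arm64 host; SPLUNK_PASSWORD is whatever admin password you choose (minimum 8 characters).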
Hello @splunky_diamond, As stated by two folks, resource consumption depends on multiple factors. If you are planning to enable ~15 use cases in ES for learning purposes with an all-in-one test environment, 32 GB RAM, 32 vCPU, and 200 GB of disk should be enough. The base configuration for ES is documented here: https://docs.splunk.com/Documentation/ES/7.3.1/Install/DeploymentPlanning
It would be better to raise a Splunk Support case for deeper troubleshooting.
Hello @dc18, have you checked https://docs.splunk.com/Documentation/AddOns/released/AWS/CloudWatch and searched for EC2 on the page?
Hi, I am trying to run Splunk using Kubernetes on my M3 Mac. When executing the command (as described here https://github.com/splunk/splunk-operator/blob/main/docs/README.md#installing-the-splunk-operator):

cat <<EOF | kubectl apply -n splunk-operator -f -
apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
  finalizers:
  - enterprise.splunk.com/delete-pvc
EOF

I am getting the error:

Failed to pull image "splunk/splunk:9.1.3": no matching manifest for linux/arm64/v8 in the manifest list entries

What do I need to do?
I have some configurations in my app's local app.conf and I would like to read them programmatically, before streaming events. How do I do that using Python? Thanks!
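A minimal sketch, assuming the script lives in the app's bin directory and you only need the values written to local/app.conf (this reads the raw file and does not apply Splunk's default/local layering; the stanza and key names are examples):

import configparser
import os

# Resolve <app>/local/app.conf relative to this script (assumed to live in <app>/bin)
app_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
conf_path = os.path.join(app_dir, "local", "app.conf")

parser = configparser.ConfigParser()
parser.read(conf_path)

# Read a value from the [launcher] stanza, if present
version = parser.get("launcher", "version", fallback=None)
print(version)

If you need the fully merged default+local view instead, querying the REST endpoint /servicesNS/nobody/<app>/configs/conf-app with your session key is the more robust route.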
I forgot to mention which app I'm using... so sorry. I tried the universal forwarder apps and will try to figure it out. Thanks for the advice.