All Posts


An old post on how you can emulate joins in Splunk: https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948
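For quick reference, a minimal sketch of the stats-based alternative discussed in that thread; the index names and the customer_id join field here are hypothetical:

index=orders OR index=customers
| stats values(order_total) as order_total values(customer_name) as customer_name by customer_id
| where isnotnull(order_total) AND isnotnull(customer_name)

The final where clause keeps only IDs that appear in both sources, which approximates an inner join; dropping it gives an outer-join-like result.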
Please do not tag me - I, like many here, volunteer my time and expertise and it is not for others to suggest what I work on. By specifically addressing people, you are also potentially excluding others who may have valuable contributions to make; it is like you don't value or are not interested in their efforts (since you haven't also directly addressed them). I imagine this can be counter-productive to resolving your issue!
Hi @MustakMU  Did you make any changes to app.conf between the old version of your app and the new version? Did you try installing the old version of the app on the same version of Splunk that you are now using to install the new version? If so, can you confirm the old one did not ask for a restart? It might help to post your app.conf, as some settings in it can cause a restart.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
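As an illustration, one common culprit is a custom conf file without a reload trigger, since Splunk can only avoid a restart for conf files it knows how to reload. A hedged app.conf sketch, where myapp_settings is a hypothetical conf file name:

[triggers]
# Reload myapp_settings.conf in place instead of requiring a restart
reload.myapp_settings = simple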
Try adding this additional argument on startup - I don't know the syntax for Docker, so please google it.

--no-prompt
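For reference, with the official splunk/splunk image the flag can usually be passed through the SPLUNK_START_ARGS environment variable; the password value below is a placeholder:

docker run -d -p 8000:8000 \
  -e SPLUNK_START_ARGS='--accept-license --no-prompt' \
  -e SPLUNK_PASSWORD='<admin_password>' \
  splunk/splunk:latest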
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Lines Processed: 1

It appears to me that it made a successful connection and request for a data download, but that download was empty. I'm not an Akamai expert, but perhaps you are requesting the wrong account/category of logs, so revalidate the input config on the HF. All the logs are INFO or higher, so there are no DEBUG-level logs. You could change the logging level from INFO to DEBUG in order to catch additional details about the connection and request that could be helpful.
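Once DEBUG is on, a search along these lines (the component and script name are taken from the log line above; adjust as needed) should surface the extra detail in _internal:

index=_internal sourcetype=splunkd component=ExecProcessor TA-Akamai_SIEM
| table _time log_level _raw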
In my research, you need 3 values and your table only has 2. Heat maps represent the change of a value over intervals of time, by category.

| timechart span=<interval> max(<value>) as perc by <field_name>
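As a concrete illustration with hypothetical index and field names - maximum response time per host in 1-hour buckets, which supplies the three dimensions a heat map needs (time, category, value):

index=web_logs
| timechart span=1h max(response_time) as perc by host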
We recently upgraded our add-on to use TLS 1.2 and Python 3, by following this blog post. Link  After upgrading, during first-time installation, the Splunk server asks for a restart. Earlier it never used to ask for a restart during first-time installation, only when we upgraded the app. Also, I'm not using an inputs.conf file in my add-on.
Hi @Punnu  If you want a count of the unique messageID values after filtering, then a simple stats count should do, since we've already done stats by messageID:

| stats count

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
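Put together with the earlier pipeline, a minimal sketch; the index name and the where condition are placeholders for whatever your original search used:

index=your_index
| stats count by messageID
| where count > 1
| stats count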
Hi @karn  I would have said clone the input, but you've done that? So the name of the input is definitely different from the original? The reason I ask is that, looking at the code, the checkpoint name is created based on the input name. The two checkpoint files that the Generic S3 input creates (the key and index checkpoints) are stored in the checkpoint directory (typically $SPLUNK_HOME/var/lib/splunk/modinputs/aws_s3) - can you check what you have in there? You could stop Splunk, clear the relevant modinput checkpoint files, and then start Splunk again. If that doesn't work, check your _internal logs for any errors, or for more info about which data it is pulling in when the input runs.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
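For reference, the stop/clear/start procedure might look like this on the forwarder, assuming a default $SPLUNK_HOME; the checkpoint file names are derived from your input name, so double-check before removing anything:

$SPLUNK_HOME/bin/splunk stop
ls -l $SPLUNK_HOME/var/lib/splunk/modinputs/aws_s3/
rm $SPLUNK_HOME/var/lib/splunk/modinputs/aws_s3/<checkpoint_files_for_your_input>
$SPLUNK_HOME/bin/splunk start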
I changed the interval time to 600s as well.
Hi @tech_g706  Those searches are accelerating the Data Models; presumably you are using Splunk Enterprise Security? I think the first thing to check is whether you are actually using all of those models for your ES rules/searches. Secondly, I would check that you have set the specific required indexes in the allowed index list for each of your Data Models in the "CIM Setup" section of ES; by default these are set to index=* but should be configured to only access the indexes that contain the relevant data for the particular data model. Check out these docs for more information on managing data models in ES. The last thing I would check is the Data Model Audit dashboard (Audit > Data Model Audit) in ES; this should give you some stats on how the data models are behaving and whether they are updating correctly. You can also check out https://docs.splunk.com/Documentation/ES/8.0.2/Install/ConfigureDatamodels#Data_model_acceleration_rebuild_behavior which has some further details on the configuration options, such as the summary period for each data model.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
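As a quick sanity check that a model's summaries are populated and usable, something like this helps (Authentication is just an example):

| tstats summariesonly=true count from datamodel=Authentication by _time span=1h

If this returns little or nothing while a summariesonly=false version returns data, the acceleration itself is the problem.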
I had the input (Generic S3) of the AWS add-on disabled for a year. After I enabled it, it ingested old data, so I disabled it and changed the initial date in inputs.conf. After restarting the Splunk service, I enabled it, but no data is coming in. I tried cloning it and changed only the name; no data is coming in that way either. I don't know how to check it. Is it a checkpoint issue or something else? Please help me check it. Thanks in advance.
Hi @tech_g706 , it isn't possible to optimize accelerated scheduled searches; you can only reduce the execution frequency, if this is compatible with your requirements. E.g. if you schedule acceleration searches every 10 or 15 minutes instead of 5, accelerated data will be available later than now, so you must change the execution time window of your Correlation Searches. In other words, with a frequency of 5 minutes you can use a time period from -10m@m to -5m@m; with a frequency of 15 minutes you must schedule Correlation Searches from -20m@m to -15m@m. Is this acceptable for you? Otherwise, you have to use summariesonly=false, but you lose performance. Ciao. Giuseppe
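For illustration, a correlation search adapted to a 15-minute acceleration cadence might constrain its window like this; the data model and split-by field are placeholders:

| tstats summariesonly=true count from datamodel=Network_Traffic where earliest=-20m@m latest=-15m@m by All_Traffic.src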
We have security logs coming into Splunk using a data input configuration. The logs have a field called security configuration ID; the IDs are unique and each config ID belongs to one app (sometimes two or three belong to one app). There are approximately 200 config IDs, and they want to restrict users from seeing other config IDs' logs. So they are asking us to create 200 indexes with the config ID in the index name and restrict access based on that. But to my knowledge, having more indexes is not a good idea; it needs more maintenance and so on.

What I am thinking instead: while configuring the data input, I can name it with the config ID so that it appears in the 'Source' field, and use a single index for all of them. When creating a role, I will assign that index and, in restrictions, give a search filter specific to each individual user. My question is: will this work as expected? If anyone is already following this approach, please confirm. Even if we restrict user A with the common index=X and Source=123456 (config ID) and save it, if he searches index=X, can he still see all config ID logs or only the 123456 logs? Please confirm. Any other alternative ideas would also help.
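In case it's useful, the role-based approach you describe might look like this in authorize.conf; the role name, index, and source value are hypothetical:

[role_app_123456]
# Limit the role to the shared index and to one config ID's events
srchIndexesAllowed = security_logs
srchFilter = source="123456"

Because srchFilter is ANDed onto every search a member of the role runs, searching index=security_logs alone should still return only the source="123456" events.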
To offer a more precise answer to this question for anyone referencing it in the future: first, download the Splunk package to your local system. Next, open the command prompt, PowerShell, or any terminal you prefer, and use the 'cd' command to navigate to the directory where the Splunk package is located, for example:

cd .\Downloads\

Then compute the hash.

For Linux and macOS:

sha256sum splunk*.tgz

For Windows (PowerShell):

Get-FileHash -Algorithm SHA256 splunk*.tgz

Compare the output against the checksum published on the Splunk download page.
Hi, I am seeking recommendations on optimizing the most resource-intensive saved searches in my Splunk Cloud instance to reduce indexer CPU utilization, which is consistently at 99%. We are using Splunk ES and the SA-NetworkProtection app. According to CMC, these are the most expensive ones, taking around 30-40 minutes to complete:

_ACCELERATE_DM_Splunk_SA_CIM_Authentication_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Network_Traffic_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Vulnerabilities_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Endpoint.Services_ACCELERATE
_ACCELERATE_DM_Splunk_SA_CIM_Network_Sessions_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Change_ACCELERATE_
_ACCELERATE_DM_SA-NetworkProtection_Domain_Analysis_ACCELERATE_
_ACCELERATE_DM_DA-ESS-ThreatIntelligence_Threat_Intelligence_ACCELERATE_

Any recommendations on how I can optimize without disabling them? Thank you
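As a starting point, the scheduler log can quantify those runtimes; a minimal sketch using the standard scheduler log fields:

index=_internal sourcetype=scheduler savedsearch_name=_ACCELERATE_DM_*
| stats count avg(run_time) as avg_runtime_s max(run_time) as max_runtime_s by savedsearch_name
| sort - avg_runtime_s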
I'd also like to know. Other tools have this functionality. The existing Splunk implementation with JavaScript is really complicated.
@ITWhisperer  Can you please help me with this topic?
Hi @livehybrid  Thank you for the reply. I would like to ask one more question: after filtering out records, how can we find the count of messageID?
Assuming your search is already using time input to set the time frame, the search can override this as shown below:

index = events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console
    ( TERM(VVF006H) OR TERM(VVF003H) OR TERM(VVZJ1BH) OR TERM(VVZJ1CH) OR TERM(VVZJ1DH) OR TERM(VVZJ1EH) OR TERM(HVVZK3A) )
    ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    [| makeresults
     | addinfo
     | eval earliest=relative_time(info_min_time,"-17h@d+17h")
     | eval latest=relative_time(earliest,"+24h")
     | table earliest latest]