
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello Team, In our environment we have created use cases in Content Management in Splunk ES. We want to know a query to search the logs in case anyone with admin access changes a use case by mistake. To explain in detail: someone with admin access made a change to a use case, and to find out who changed it I searched the internal logs with index="_internal" sourcetype=*content_management* but I am not getting any useful data from this query. Please help me understand where the logs for Content Management (use cases) in Enterprise Security are stored and how to search them; if anyone has a working query, please share it. We need to check the internal logs for the changes being made in Content Management. Thanks in advance.
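A possible starting point for anyone answering, sketched under the assumption that the default internal logs are available: ES correlation searches are saved searches under the hood, so edits made through Content Management normally show up as REST POST requests in splunkd's access log. The field names below are the usual default extractions; if they are not present, match the strings against _raw instead:

index=_internal sourcetype=splunkd_access method=POST uri=*/saved/searches/*
| table _time user method uri status
| sort - _time

The _audit index may also be worth checking for the same time window.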
I see an interesting Simple XML idiom below: <input type="multiselect" token="multiselect_lines" searchWhenChanged="true"> <label>Lines</label> <choice value="ACEKLMRSWY">All lines</choice> <choice value="A">A Line</choice> <choice value="C">C Line</choice> <choice value="E">E Line</choice> <choice value="K">K Line</choice> <choice value="L">L Line</choice> <choice value="M">M Line</choice> <choice value="R">R Line</choice> <choice value="S">S Line</choice> <choice value="W">W Line</choice> <choice value="Y">Y Line</choice> <default>ACEKLMRSWY</default> <prefix>regex Location="^[</prefix> <suffix>]"</suffix> <change> <eval token="form.multiselect_lines"> case( mvcount('form.multiselect_lines') == 2 AND mvindex('form.multiselect_lines', 0) == "ACEKLMRSWY", mvindex('form.multiselect_lines', 1), mvfind('form.multiselect_lines', "ACEKLMRSWY") == mvcount('form.multiselect_lines') - 1, "ACEKLMRSWY", true(), 'form.multiselect_lines')</eval> </change> </input> It seems to update the appearance of the multiselect input "multiselect_lines": whenever the selections in the multiselect change, "form.multiselect_lines" is updated accordingly. I guess it is meant to work around a deficiency of multiselect in Splunk, namely that the "All" option does not disappear automatically when a subset is selected, and does not come back as the default when no subset is selected any more. That is my attempt at understanding what the idiom achieves. It works as hypothesized in a dashboard that I am studying, but when I copied the mechanism into my own dashboard, it had no effect on the behavior. So I wonder what the token with the pattern form.<multiselect_input_token> really is, and what it takes to make the above mechanism work for automatically removing and re-adding "All" in the display. I know there is a JavaScript solution that modifies the list of multiselect options on the fly, but I don't have the admin privilege to add JavaScript for my dashboard, so a solution that does not require admin privilege would be handy.
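For anyone comparing notes: form.<input_token> is the token bound to what the input itself displays, so setting it in the <eval> changes the selection shown in the multiselect, while the plain $multiselect_lines$ token is what searches consume, already wrapped in the <prefix>/<suffix>. A minimal sketch of a panel using it, with index and field names invented for illustration:

<panel>
  <table>
    <search>
      <query>index=my_index sourcetype=my_sourcetype | $multiselect_lines$ | stats count by Location</query>
    </search>
  </table>
</panel>

With the prefix and suffix above, $multiselect_lines$ expands to something like: regex Location="^[ACEKLMRSWY]".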
Hi, I am trying to monitor data from about 200 servers with different sources. What is the best way to do this easily and efficiently? I am on a time crunch, so any help would be fantastic. I understand that putting a universal forwarder on a server will send its data to the indexer, but I can't configure that by hand for over 200 servers. Thanks
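One common pattern, sketched here with placeholder names: install the universal forwarder once through your normal OS deployment tooling (packages, GPO, imaging), have every forwarder phone home to a deployment server, and let the deployment server push the inputs and outputs apps to all 200 machines at once. The only per-server configuration is then roughly:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on each forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

Everything else (what to monitor, where to send it) lives in apps on the deployment server, so adding a new source later is a single change.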
Hello folks, I am new to Splunk. We need to reduce the number of rotated log files kept from 5 to 3 on Splunk Enterprise for Windows. Is it safe to change all "maxBackupIndex" keys directly in /etc/log.cfg, or is there a better way to achieve that goal? Thank you
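For context while this waits for an answer: edits made directly to $SPLUNK_HOME/etc/log.cfg can be overwritten on upgrade, so Splunk also reads an override file, $SPLUNK_HOME/etc/log-local.cfg, in the same directory. A minimal sketch, copying only the keys to change (the appender name A1 is illustrative; use the appender names that actually appear in your own log.cfg, and restart Splunk afterwards):

# $SPLUNK_HOME/etc/log-local.cfg
appender.A1.maxBackupIndex=3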
Hello folks, we have some Linux machines with the UF installed that connect to our search head. We don't have access to those machines. Is there an SPL query we can use to know when the UF version on those machines has changed? Thank you.
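One place this usually shows up, borrowed from the standard forwarder-monitoring searches: indexers record a version field for every incoming forwarder connection in the tcpin_connections metrics written to _internal, so a host appearing with a new value of that field indicates an upgrade. Field names (hostname, version, fwdType) are the usual defaults:

index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections
| stats earliest(_time) as first_seen latest(_time) as last_seen by hostname version
| sort hostname first_seen
| convert ctime(first_seen) ctime(last_seen)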
Here is my experience troubleshooting Splunk data ingestion issues. 1. Search for the top 3 issues in your environment:   index=_internal host=<indexer_host or HF_host> source="/opt/splunk/var/log/splunk/splunkd.log" log_level=WARN | top 3 component   2. Address the number 1 issue and review. I'll use an example component: in my case, DateParserVerbose was the top issue. When you drill down into the logs, you may notice that the fields you need to narrow down the issue are not parsed out by the TA or by Splunk. Here is the SPL to extract them using rex:   index=_internal host=<indexer_host or HF_host> source="/opt/splunk/var/log/splunk/splunkd.log" log_level=WARN component=DateParserVerbose | rex "\] - (?P<source_message>.+)source\S(?P<data_source>.+)\|host\S(?P<data_host>\w{5}\d{4}\S$m$\S$msk$\S$mask$\S$msk$)" ```Note: $m$, $msk$ and $mask$ are masked values; put your own domain pattern here```   3. Then you can see which source is causing which issue. You can even break the message down further with rex to find the common cause, and repeat step 1:   | stats values(data_source) as data_source by data_host source_message | top source_message   Happy Splunking! We wish we had this in the beginning, but most Splunkers are tasked with so many things that there is never enough time to troubleshoot ingestion issues. How do you handle other types of troubleshooting, like filtering events, sending things to the null queue, or blacklisting? Do you see any of that in the logs?
Hi all, Can anyone recommend a way of allowing 'investigative' information to be added to an alert, such that it's stored in our Splunk Cloud instance? We use a 3rd party supplier who carries out triage on our Splunk alerts, and they have their own ticketing system as part of their SOAR infrastructure. That works well but it's not our system and if we decide to stop using that supplier we potentially lose all the information and comments added to alert tickets. We would therefore like to add our own comments within Splunk when an alert is triggered, so that the information is stored for future reference, e.g. to help an analyst investigating an alert if it triggers again in future, or if someone else in the organisation wants to modify that alert. I'm aware an alert is simply an action that takes place if the output of a query meets a certain condition, so I'm not necessarily asking how to add information to the alert object itself. I'm open to any suggestions, e.g. simple add-ons or Apps that act like a basic SOAR/incident management system. Thanks.
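In case it helps frame suggestions: one low-tech pattern is to keep analyst comments in a lookup (or KV store collection) inside Splunk Cloud, keyed by the alert name, so the notes survive regardless of the external ticketing system. A minimal sketch with made-up field and lookup names, run ad hoc or wired to a small form dashboard:

| makeresults
| eval alert_name="Suspicious Login Burst", analyst="jsmith", comment="Confirmed benign - expected VPN gateway change"
| table _time alert_name analyst comment
| outputlookup append=true alert_triage_notes.csv

An investigating analyst can then pull the history back with | inputlookup alert_triage_notes.csv | search alert_name="Suspicious Login Burst".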
Hi, I'm trying to identify which users updated which lookup file and what information they updated. I was planning to track this information using dashboards. Thank you for your assistance in advance. The search I tested is below, but it only displays a limited set of data such as user, app, and lookup file name. index=_internal "Lookup edited successfully" | table _time user namespace lookup_file
We currently get email alerts from Splunk whenever they are scheduling any maintenance on our instances; however, we would like to add an additional email address and we are not sure how this could be done. If anyone could point us to any documentation or give steps on how this can be accomplished, that would be great. Thanks
Hi, I want to convert the epoch time appearing in a field in my events, and I want to convert it at index time, so that when I search for events, instead of {"@timestamp":1663854197000,"event":{"id":"101........................ I see {"@timestamp":human readable format,"event":{"id":"101........................ I know that Splunk reads the epoch time and converts it to a human-readable format in the _time field, but I want to transform the raw events themselves to contain the human-readable format. I am assuming I would need to do it in props.conf to do it at index time; maybe SEDCMD could do it, I am not sure, I just can't get the syntax right. I would really appreciate it if anyone could help with this. Thank you in advance!
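One possible angle, offered as an untested sketch rather than a known-good config: SEDCMD only does regex substitution, so it cannot do the epoch-to-date conversion by itself, but recent Splunk versions support index-time INGEST_EVAL transforms, which can rewrite _raw with strftime(). The stanza names below are made up, the /1000 assumes the timestamp is in milliseconds, and you would need to confirm that the json_* eval functions are supported at ingest time in your version:

props.conf
[your_sourcetype]
TRANSFORMS-humantime = rewrite_epoch_timestamp

transforms.conf
[rewrite_epoch_timestamp]
INGEST_EVAL = _raw=json_set(_raw, "@timestamp", strftime(json_extract(_raw, "@timestamp")/1000, "%Y-%m-%dT%H:%M:%S%z"))

Keep in mind this permanently changes the raw data at index time; a search-time conversion is usually the safer default.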
Hi team, I am from the admin team. I want to know how many of our indexes are empty and no longer receiving data, so that I can remove those indexes and clean up my indexes.conf. Is there a query or some other way to find this?
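A couple of sketches that are commonly used for this kind of cleanup; treat the output as candidates to verify, and on a distributed deployment run them where they can see the indexers (e.g. from the monitoring console). The first lists indexes whose reported event count is zero, the second shows when each index last received data:

| rest /services/data/indexes
| stats sum(totalEventCount) as total_events by title
| where total_events=0

| tstats latest(_time) as last_event where index=* by index
| eval last_event=strftime(last_event, "%F %T")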
Hi, we are using multiple heavy forwarders, and whenever we change inputs.conf for log collection we currently do it manually on every heavy forwarder. Is there any way to update and push the configuration to all of them at once? We are using a deployment server to manage our universal forwarders/clients. How can we use the deployment server to manage the HFs as well?
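In case it helps whoever answers: a heavy forwarder is a full Splunk Enterprise instance, so it can be made a deployment client exactly like a UF (deploymentclient.conf pointing at the deployment server), and the shared inputs.conf is then packaged in an app and pushed to all HFs at once. A rough sketch of the server-side piece, with the app name and host pattern made up for illustration:

# serverclass.conf on the deployment server
[serverClass:heavy_forwarders]
whitelist.0 = hf-*.example.com

[serverClass:heavy_forwarders:app:hf_inputs]
restartSplunkd = true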
I am trying to create a dashboard that has a dropdown input, e.g.: <input type="dropdown" token="HWStat" searchWhenChanged="true"> <label>HW Status</label> <choice value="*">All</choice> <choice value="Installed">Installed</choice> <default>*</default> <initialValue>*</initialValue> </input> This was the easier part. When I use the token $HWStat$, it works fine for passing the "Installed" option to the search. However, when the asterisk "*" option for All is passed, the search only displays results that have a value for that field. I know there are a lot of null values in there as well that I would like the output to display, and there appear to be some blanks too (I've tried to set all blank fields to null using fillnull, and some appear to be either whitespace or empty strings). The basic search I was testing ends with: | stats ... | search hw_stat_column=$HWStat$ What is the better option to pass, other than "*", to capture everything? I've scoured the internet, and this is not an easy thing to search for. Thanks in advance!
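While this waits for answers, one approach that may help: a wildcard only matches events where the field actually has a value, so the null and blank rows have to be normalized into a literal value before the filter runs. A minimal sketch, with the "(blank)" placeholder chosen arbitrarily and the All choice left as *:

| stats ...
| fillnull value="(blank)" hw_stat_column
| eval hw_stat_column=if(trim(hw_stat_column)="", "(blank)", hw_stat_column)
| search hw_stat_column=$HWStat$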
Hello! I have a log file with the following pattern:

13:06:03 CRITICAL  [app] An error happened while processing message active/mastercard/event/secondpresentmentcreateevent/v1/2022/08/30/afae9068-8dc2-5e3a-9e4a-83081925238f ["message" => "[{"requestid":"49120180-f64d-863d-f7f5-c2f58b180587","source":"SYSTEM","reasoncode":"INVALID_REQUEST","description":" [CreateCR2] usecase is not applicable in this context.","recoverable":false,"details":[{"name":"ErrorDetailCode","value":"100001"}]}]","status" => 400,"trace" => [["file" => "/var/www/drm-scheme/vendor/react/event-loop/src/Timer/Timers.php","line" => 101,"function" => "App\Command\{closure}","class" => "App\Command\AbstractQueueProcessor","type" => "->"],["file" => "/var/www/drm-scheme/vendor/react/event-loop/src/StreamSelectLoop.php","line" => 185,"function" => "tick","class" => "React\EventLoop\Timer\Timers","type" => "->"],["file" => "/var/www/drm-scheme/src/AppBundle/Command/AbstractQueueProcessor.php","line" => 311,"function" => "run","class" => "React\EventLoop\StreamSelectLoop","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Command/Command.php","line" => 255,"function" => "execute","class" => "App\Command\AbstractQueueProcessor","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Application.php","line" => 929,"function" => "run","class" => "Symfony\Component\Console\Command\Command","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/framework-bundle/Console/Application.php","line" => 96,"function" => "doRunCommand","class" => "Symfony\Component\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Application.php","line" => 264,"function" => "doRunCommand","class" => "Symfony\Bundle\FrameworkBundle\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/framework-bundle/Console/Application.php","line" => 82,"function" => "doRun","class" => "Symfony\Component\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Application.php","line" => 140,"function" => "doRun","class" => "Symfony\Bundle\FrameworkBundle\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/bin/console","line" => 42,"function" => "run","class" => "Symfony\Component\Console\Application","type" => "->"]],"line" => 261,"class" => "App\Command\AbstractQueueProcessor","request" => "active/mastercard/request/secondpresentmentrequest/v1/2022/08/29/204618304273/cb905322-b2ab-4742-8acd-a7915b9be744","caseId" => "204618304273"] ["uid" => "31473ed"]

But I need to understand how I can set this up under Settings -> Source types -> (create a source type) so that this event is highlighted correctly. Here is how I tried to set it up (but it didn't work):
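A sketch of the source type settings that usually matter for a multi-line entry like this one; the values below are guesses based on the sample (the entry starts with a bare HH:MM:SS timestamp followed by a log level), not a tested configuration:

[my_app_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{2}:\d{2}:\d{2}\s+(CRITICAL|ERROR|WARNING|INFO|DEBUG)
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 10
TRUNCATE = 100000

With these, each new timestamp-plus-level line starts a new event, so the long payload stays attached to the CRITICAL line above it.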
Hi, can anybody help, please? I'm using a classic forwarder to index a regular CSV file. The date/time of the CSV log file changes whenever a new entry arrives, and each event has a TIME attribute. If I choose the time interval TODAY, 100 events have been indexed, but the indexed time _time is always the same (similar to the time of the first event), even though the TIME attribute of each event changes, of course. Does anybody have an idea where the problem is? If I restart the forwarder, the problem appears again the next day.
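This looks like Splunk not finding a usable timestamp per event and falling back to the first or current time. A sketch of props.conf settings that often address it for structured CSVs (the sourcetype name is a placeholder, the column name TIME is taken from the post, and the TIME_FORMAT is an assumption about how that column is written):

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = TIME
TIME_FORMAT = %Y-%m-%d %H:%M:%S

Note that with INDEXED_EXTRACTIONS the props.conf has to live on the forwarder itself, and the forwarder needs a restart to pick it up.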
My rex search is returning all the rows instead of the one being searched. What am I doing wrong? index=cloudwatchlogs loggroup="/aws-glue/jobs/xxxxx/*" meta_region="us-east-1" meta_env="TEST" meta_type="aws:jobs" | rex field="message.message" max_match=0 "Total rows from Raw Call meta:\s(?<msg1>\d+)\s" | rex field="message.message" max_match=0 "Total Meta rows written to S3 bucket:\s(?<msg2>\d+)\s" | rex field="message.message" max_match=0 "Total QCI Raw Data rows read from S3 bucket:\s(?<msg3>\d+)\s" | rex field="message.message" max_match=0 "Total root rows written to S3 bucket:\s(?<msg4>\d+)\s" Sample data - INFO:__main__:Total rows from Raw Call meta: 3995 INFO:__main__:Deleting duplicate rows INFO:__main__:Total rows before Deleting duplicate rows: 3995 INFO:__main__:Listing duplicates, if any INFO:__main__:Total Meta rows written to S3 bucket: 3995 INFO:__main__:Processing RAW QCI Data.
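In case it helps whoever answers: rex only extracts fields, it never filters, so every event is still returned, just with msg1-msg4 empty where the patterns did not match. A sketch of the usual fix, appended after the four rex commands (keeping them unchanged):

| where isnotnull(msg1) OR isnotnull(msg2) OR isnotnull(msg3) OR isnotnull(msg4)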
Hi, I have several searches where I perform renaming. Some of them are done on fields which look like xxx.yyy{}.aaa xxx.yyy{}.bbb zzz{}.ccc In the search I do | rename xxx.yyy{}.aaa as newname1, xxx.yyy{}.bbb as newname2, zzz{}.ccc as newname3 I tried to implement the same thing with a field alias configuration, but it doesn't work. Is it possible? I can't find any documentation about this particular case. PS: my field aliases work properly on fields without curly braces
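For whoever picks this up, the props.conf form I would try first, sketched with the original field names and a made-up stanza name; recent Splunk versions accept quoted source field names in FIELDALIAS, which is usually the sticking point with the {} characters, but I have not verified it against these exact fields:

[your_sourcetype]
FIELDALIAS-curly_braces = "xxx.yyy{}.aaa" AS newname1 "xxx.yyy{}.bbb" AS newname2 "zzz{}.ccc" AS newname3

Field aliases keep the original field and add the new name, so existing searches are not broken.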
I am also facing a similar issue. I have installed and configured the Commvault Splunk plugin available at https://splunkbase.splunk.com/app/5718/ and I am trying to get the respective data from Commvault version 11.24.48. I have checked the permissions for the user (not a service account) of the Commcell; they are the same as in your shared screenshot, but still no luck. Also, I am using port 81 in the format <webserver_url>:<IIS port number> for setting up the Commcell page. This port is open from Splunk to Commvault but is not open from Commvault to Splunk. Please help me understand the possible reasons for the issue. 1) connectivity issue could be due
Hi Community! I am trying to find a good example of setting a background image on a classic dashboard. This process seems very simple in Dashboard Studio but nearly impossible in a classic dashboard. Can someone please point me in the right direction? Thanks