All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, my index stopped receiving data three months ago. On investigating, I found that data was not being ingested because the API token it relies on had expired. I have now fixed the token and want the missed data to be loaded again. How do I get the index running and backfill the gap?
I tried to remove the ThreatQ application files from /etc/apps on the search head, but every time I remove them they reappear, even after I also removed its files from /etc/users. Is there any solution for this?
Hi All, I want to extract the service name from the source field of sourcetype="aws:metadata". Example source value: 434531263412:eu-central-1:elasticache_describe_reserved_cache_nodes_offerings

I am using this query:

index=* sourcetype=aws:metadata
| eval aws_service=mvindex(split(source,":"),2)
| rex field=aws_service "(?<aws_service>[^_]+)"
| table aws_service source
| dedup aws_service

With this I get the result elasticache, which is what I want. But for "434531263412:us-west-2:nat_gateways" it extracts just nat, when it should be gateways. Similarly, for 434531263412:eu-central-1:application_load_balancers it extracts application. I was thinking we could check for the keyword and update the value accordingly. I also want to put this in props.conf so the aws_service field is created from source automatically. Can anyone help me achieve this? Regards, PNV
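A possible approach (a sketch, not tested; the normalization list below is an assumption built only from the examples in this question) is to keep the full third colon-delimited segment and map known multi-word services explicitly, falling back to the first underscore-delimited token for everything else:

```spl
index=* sourcetype=aws:metadata
| eval segment=mvindex(split(source, ":"), 2)
| eval aws_service=case(
    like(segment, "nat_gateways%"), "nat_gateways",
    like(segment, "application_load_balancers%"), "application_load_balancers",
    true(), mvindex(split(segment, "_"), 0))
| table aws_service source
| dedup aws_service
```

For the props.conf part, the same eval expression could presumably be moved into an EVAL-aws_service calculated field under the aws:metadata sourcetype stanza, so the field is created at search time without repeating the logic in every query.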
I can find my DB Connect input in the "/app/splunk/var/log/splunk/splunk_app_db_connect_job_metrics.log" log. It essentially runs a "select * from a table" every 4 hours and sends the results to an index. It always runs to completion with "status=COMPLETED", but at times it finishes with an error_count > 0, and we notice that the log events for that run are not added to the index. Where can I see what these errors are and why they were generated?
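A hedged starting point for digging into this (the source wildcard is an assumption based on a default DB Connect install, which writes several app logs alongside the job metrics log) is to search the _internal index for errors from the app around the time of the affected run:

```spl
index=_internal source=*splunk_app_db_connect* ERROR
| table _time source _raw
| sort - _time
```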
Greetings, I have just started using Splunk, and I was trying to monitor logs from my files section, but I am getting the following errors while doing so. I am using a heavy forwarder for this. I have pointed my forwarder output at 192.168.196.51:9997 and also configured a receiver on port 9997. I don't know where I am making a mistake. Please help me with this. Thanks and Regards.
I need to create a dashboard panel merging two different search queries. I have the two queries below; kindly help with this request.

index=test_index source=/applications/test/*instance_abc* ("<--- TRN:" OR "Priority" OR "---> TRN:" OR "AP sent to" OR "AH sent to" OR "MP sent to")
| rex field=_raw "Priority\=(?<Priority>[^\,]+)"
| rex "(?:\={3}\>|\<\-{3})\s+TRN[^\:]*\:\s+(?<trn>[^\s]+)"
| rex "TEST\.RCV\.FROM\.(?<TestMQ>.*)\@"
| stats count(eval(Priority=="Low")) as Low, count(eval(Priority=="Medium")) as Medium, count(eval(Priority=="High")) as High, values(TestMQ) as TestMQ by trn
| stats sum(Low) as Low, sum(Medium) as Medium, sum(High) as High by TestMQ
| addtotals fieldname="TotalCount"
| sort TotalCount desc

This gives me output in the form: TestMQ | Low | Medium | High | TotalCount

The 2nd query is:

index=test_index source=/applications/test/*instance_abc* ("<--- TRN:" OR "Priority" OR "---> TRN:" OR "AP sent to" OR "AH sent to" OR "MP sent to")
| eval field=split(source,"/")
| eval Instance=mvindex(field,4)
| chart count(eval(searchmatch("from"))) as Testget count(eval(searchmatch("sent to"))) as Testput count(eval(searchmatch("AP sent to"))) as AP count(eval(searchmatch("AH sent to"))) as AH count(eval(searchmatch("MP sent to"))) as MP by Instance
| eval Pending = Testget - (AP + AH)
| sort Testget desc

This gives me output in the form: Instance | Testget | Testput | AP | AH | MP | Pending

I am looking to merge both queries and get the final output based on Pending volume for the Low, Medium and High priority counts.

Select: Low, Medium, High (from the dashboard dropdown)
Expected output:
TestMQ | Low-Testget | Low-Testput | Low-AP | Low-AH | Low-MP | Low-Pending
TestMQ | Medium-Testget | Medium-Testput | Medium-AP | Medium-AH | Medium-MP | Medium-Pending
TestMQ | High-Testget | High-Testput | High-AP | High-AH | High-MP | High-Pending
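One way to sketch this merge (untested; it assumes each event carries both the Priority value and the "from"/"sent to" markers, and that the dashboard dropdown sets a token named $priority$) is to compute the per-queue counters in a single pass and filter by the selected priority:

```spl
index=test_index source=/applications/test/*instance_abc* ("<--- TRN:" OR "Priority" OR "---> TRN:" OR "AP sent to" OR "AH sent to" OR "MP sent to")
| rex field=_raw "Priority\=(?<Priority>[^\,]+)"
| rex "TEST\.RCV\.FROM\.(?<TestMQ>.*)\@"
| search Priority="$priority$"
| stats count(eval(searchmatch("from"))) as Testget
        count(eval(searchmatch("sent to"))) as Testput
        count(eval(searchmatch("AP sent to"))) as AP
        count(eval(searchmatch("AH sent to"))) as AH
        count(eval(searchmatch("MP sent to"))) as MP
        by TestMQ
| eval Pending = Testget - (AP + AH)
| sort - Pending
```

If Priority and TestMQ are not present on the same events, the two searches would instead need to be correlated on a shared key such as trn before the stats step.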
I have a lookup like this:

Name    Status  ExamID
John    Pass    123
Bob     Pass    345
John    Fail    234
Bob     Pass    235
Smith   Fail    231

My events have Name alone as the unique identifier. I wrote my query like this:

index=userdata [| inputlookup userinfo.csv | fields Name ]
| lookup userinfo.csv Name as Name OUTPUT Status as Status ExamID as Identifier

Via the first subsearch I restricted the events to names present in the table, and then I tried to output the Status and ExamID for those names. From the combination of these three values on each event I need to evaluate a fourth result. For John - Pass - 123: if the ExamID falls between 120 and 125, I need to set a fourth field to "GOOD". However, when printing the output from the lookup I got multivalues, like "John Pass Fail 123 234". I then tried mvappend, which did not work correctly. How do I do this correctly?
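A sketch of one common way to handle this (it assumes the lookup returns the multivalue Status and ExamID fields in matching order, which Splunk lookups normally do; the "OK" fallback label and Result field name are illustrative assumptions): zip the pairs back together, expand them to one row per pair, and then evaluate the condition.

```spl
index=userdata [| inputlookup userinfo.csv | fields Name ]
| lookup userinfo.csv Name OUTPUT Status, ExamID
| eval pair=mvzip(Status, ExamID, "|")
| mvexpand pair
| eval Status=mvindex(split(pair, "|"), 0), ExamID=tonumber(mvindex(split(pair, "|"), 1))
| eval Result=if(Status=="Pass" AND ExamID>=120 AND ExamID<=125, "GOOD", "OK")
| table Name Status ExamID Result
```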
Hi Team, good day! We have extracted a set of job names from the event using the rex query below.

index=app_events_dwh2_de_uat _raw=*jobname* | rex max_match=0 "\\\\\\\\\\\\\"jobname\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<Name>[^\\\]+).*?\\\\\\\\\\\\\"status\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<State>ENDED OK).*?Timestamp\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<TIME>\d+\s*\d+\:\d+\:\d+).*?execution_time_in_seconds\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<EXECUTION_TIME>[\d\.\-]+)" | table "TIME", "Name", "State", "EXECUTION_TIME" | mvexpand TIME | dedup TIME

This query produces a table like the following:

20240417 21:13:23  CONTROL_M_REPORT  ENDED OK  73.14
DWHEAP_FW_BHW  ENDED OK  80.66
DWHEAP_FW_TALANX  ENDED OK  80.18
DWHEAP_TALANX_LSP_FW_NODATA  ENDED OK  3.25
SALES_EVENT_TRANSACTION_RDV  ENDED OK  141.41

Is it possible to extract only the jobs whose name contains the string NODATA from this set of job names? Below is a sample event.

Dataframe row : {"_c0":{"0":"{","1":" \"0\": {","2":" \"jobname\": \"CONTROL_M_REPORT\"","3":" \"status\": \"ENDED OK\"","4":" \"execution_time_in_seconds\": \"46.39\"","5":" \"Timestamp\": \"20240418 12:13:23\"","6":" }","7":" \"1\": {","8":" \"jobname\": \"DWHEAP_FW_AIMA_001\"","9":" \"status\": \"ENDED OK\"","10":" \"execution_time_in_seconds\": \"73.14\"","11":" \"Timestamp\": \"20240418 12:13:23\"","12":" }","13":" \"2\": {","14":" \"jobname\": \"DWHEAP_FW_BHW\"","15":" \"status\": \"ENDED OK\"","16":" \"execution_time_in_seconds\": \"71.19\"","17":" \"Timestamp\": \"20240418 12:13:23\"","18":" }","19":" \"3\": {","20":" \"jobname\": \"DWHEAP_FW_NODATA\"","21":" \"status\": \"ENDED OK\"","22":" \"execution_time_in_seconds\": \"80.63\"","23":" \"Timestamp\": \"20240418 12:13:23\"","24":" }","25":" \"4\": {","26":" \"jobname\": \"DWHEAP_FW_TALANX\"","27":" \"status\": \"ENDED OK\"","28":" \"execution_time_in_seconds\": \"80.20\"","29":" \"Timestamp\": \"20240418 12:13:23\"","30":" }","31":" \"5\": {","32":" \"jobname\": \"DWHEAP_FW_UC4_001\"","33":" \"status\": \"ENDED OK\"","34":" \"execution_time_in_seconds\": \"80.13\"","35":" \"Timestamp\": \"20240418 12:13:23\"","36":" }","37":" \"6\": {","38":" \"jobname\": \"DWHEAP_TALANX_LSP_FW_NODATA\"","39":" \"status\": \"ENDED NOTOK\"","40":" \"execution_time_in_seconds\": \"120.12\"","41":" \"Timestamp\": \"20240418 12:13:23\"","42":" }","43":" \"7\": {","44":" \"jobname\": \"RDV_INFRASTRUCTURE_DETAILS\"","45":" \"status\": \"ENDED OK\"","46":" \"execution_time_in_seconds\": \"81.16\"","47":" \"Timestamp\": \"20240418 12:13:23\"","48":" }","49":" \"8\": {","50":" \"jobname\": \"VIPASNEU_STG\"","51":" \"status\": \"ENDED OK\"","52":" \"execution_time_in_seconds\": \"45.04\"","53":" \"Timestamp\": \"20240418 12:13:23\"","54":" }","55":"}"}}

Please look into this and kindly help us extract the jobs whose names contain the string NODATA.
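A possible filter (a sketch; it assumes Name is the multivalue field produced by the existing rex, and it would be appended after the rex and before the table command in the query above) expands the extracted names to one per row and keeps only those containing NODATA:

```spl
| mvexpand Name
| where like(Name, "%NODATA%")
| table TIME Name State EXECUTION_TIME
```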
Hi All, I have JSON events containing test cases, their statuses, and the Jenkins build number. There are many test cases in my events. I want to find whether any test case has been failing continuously for more than 5 Jenkins builds, and if so, list those test cases. I have tried streamstats but was not able to implement it fully. Does anyone have a better approach? Please guide me on this.
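A hedged sketch (the index name and the field names testcase, status, and build are assumptions about the JSON layout): sort by test case and build, use streamstats with reset_on_change so the counter restarts whenever the status flips, then keep test cases whose consecutive-failure streak reaches 5:

```spl
index=jenkins_results
| sort 0 testcase build
| streamstats reset_on_change=true count as consecutive_fails by testcase status
| where status="FAIL" AND consecutive_fails >= 5
| stats max(consecutive_fails) as max_consecutive_fails by testcase
```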
Hi Community, I have a question about regex and extraction. My _raw data comes in 2 rows/lines (keys and values), and I have to extract fields as key/value pairs, e.g.:

row 1: Test1 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9 Test10
row 2: 101    102     103.    104.     105.   106.   107.   108.   109.    110

I have to extract only Test7 from the above log and print its value in a table. Please help me. Regards, Moin
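A sketch, assuming both rows live in the same event with the header row first and the same number of whitespace-separated tokens on each line (real data may need whitespace normalization first): a multiline rex skips the first six tokens of each line and captures the seventh, i.e. Test7 and its value.

```spl
| rex field=_raw "(?m)^(?:\S+\s+){6}(?<Test7_name>\S+)[^\n]*\n(?:\S+\s+){6}(?<Test7>\S+)"
| table Test7_name Test7
```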
Hi All, I want to know whether anyone has a document or configuration guide for integrating Splunk with HANA using PowerConnect.
Hi, I am trying to create a daily alert that emails the contents of the Security Posture dashboard to a recipient. Can someone please share how I can turn the content of this dashboard from Splunk ES into a search within an alert, so it can be added to an email and sent out daily? Thanks
As shown in the screenshot above, I am unable to view the Data Summary tab in our Splunk search environment.
On the cluster master, in one of $SPLUNK_HOME/etc/master-apps/<app-name>/local/indexes.conf, I set remote.s3.access_key and remote.s3.secret_key to the same access_key and secret_key used with s3cmd. However, after applying the cluster bundle, the indexes.conf is updated and both key values are replaced. The new keys not only replace the ones under the [default] stanza but also those in each index stanza. Where do the new keys come from? Is it expected that the keys are overwritten?
I have a log stream in this format:

level=info request.elapsed=100 request.method=GET request.path=/orders/123456 request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270 response.status=500

I have extracted the fields using regex:

| rex field=message "level=info request.elapsed=(?<duration>.*) request.method=(?<method>.*) request.path=(?<path>.*) request_id=(?<request_id>.*) response.status=(?<statusCode>.*)"

I want to build a new field called route based on the extracted path field. For example, for "path=/orders/123456" I want to create the new field "route=/orders/{orderID}", so I can group by route rather than by path, since path contains a real parameter value that prevents grouping. How can I achieve this? Thanks.
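One sketch uses eval's replace() function (which accepts a regular expression) to substitute numeric path segments with a placeholder; the {orderID} name follows the question, and the pattern would need extending if IDs are not purely numeric:

```spl
| eval route=replace(path, "/\d+", "/{orderID}")
```

For "/orders/123456" this should yield "/orders/{orderID}", and stats or timechart can then group by route.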
Hi everybody, I was giving an internal demo presentation with demo1, and someone noticed that on one server the memory usage was high (87.8%). When we checked the running processes, no process was consuming memory except the machine agent at about 3%, so we don't understand why that 87.8% peak is shown. In another example the opposite happens: the server's memory usage is 34.6%, but the sum of the processes is well over 100%. Is this a misinterpretation on our part, or an issue with demo1? In other demos the sum is correct according to each process. Thanks in advance. Hope you're having a great day.
So, I am running a job and I can see all my jobs and all other users' jobs. However, the other users/power users cannot see my jobs while they are running. What could cause that?

Some users also cannot see my dashboard panels that use loadjob, apparently because they don't have the permissions, even though I have both read and write enabled for everyone. Why could that be?
I am getting a message that our Splunk certificate is expired when I scan our systems. However, I cannot find the certificate anywhere in Windows Certificates. I also searched C:\Program Files\Splunk\etc\auth\mycerts and it is empty, and I checked the config in \Splunk\etc\system\local, where web.conf doesn't have anything about a cert in it. How can I find this cert, and where is it coming from? It's on our web port.
I am trying to set some token values when a dashboard loads or when the page is refreshed. The documentation gives the following example:

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$global_time.latest$",
                    "earliest": "$global_time.earliest$"
                }
            }
        }
    },
    "tokens": {
        "default": {
            "tokenName": {
                "value": "1986"
            }
        }
    }
},

This is my code:

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$global_time.latest$",
                    "earliest": "$global_time.earliest$"
                }
            }
        }
    },
    "tokens": {
        "default": {
            "Slot1_TailNum": {
                "value": "false"
            }
        }
    }
},

which is not working. I am using an Interactions "Set tokens" action to set the "Slot1_TailNum" token to something other than false to hide/show a table, and that works fine. However, when reloading the dashboard or refreshing the page the table is still displayed; the token does not seem to be set to false on load. Any help would be greatly appreciated. I can run a Zoom if required, if you want/need to see it. Thanks, David
We want to add a host dropdown to a dashboard. Please find the host details below.

dev1: appdev1host, logdev1host, cordev1host
dev2: appdev2host, logdev2host, cordev2host
dev3: appdev3host, logdev3host, cordev4host
dev4: appdev4host, logdev4host, cordev4host
sit1: appsit1host, logsit1host, corsit1host
sit2: appsit2host, logsit2host, corsit2host
sit3: appsit3host, logsit3host, corsit3host
sit4: appsit4host, logsit4host, corsit4host

The dropdown in the dashboard should have only these 8 choices: dev1, dev2, dev3, dev4, sit1, sit2, sit3, sit4. For example, if I choose dev1 it should capture all the hosts listed for dev1 (appdev1host, logdev1host, cordev1host).
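Since the host names embed the environment name, one sketch (assuming a static dropdown input that sets a token named $env$ to one of dev1 … sit4, and a hypothetical index name) is to build the three host names from the token inside the panel search:

```spl
index=your_index host IN ("app$env$host", "log$env$host", "cor$env$host")
| stats count by host
```

With this pattern the dropdown only needs the 8 environment values; the search expands each selection to its app/log/cor hosts automatically.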