All Topics



Hiya, I'm trying to use the Splunk REST API to update macros that I've recently had to move under a different app than the default `search` app. When the macro lived in the `search` app, I was able to make a POST request to

    /servicesNS/<account>/search/admin/macros/<macroName>

and this worked:

    elif search_or_macro == 'macros':
        url = '<ROOT>/servicesNS/<ACCOUNT>/search/admin/macros/{}'.format(macro_name)
        res = requests.post(url, headers=headers, data={'definition': r'{}'.format(macro_definition)})

However, once I moved the macros under the new app, let's call it `my_new_app`, POST requests no longer update the macro. This is what I have currently:

    elif search_or_macro == 'macros':
        url = '<ROOT>/servicesNS/nobody/my_new_app/admin/macros/{}'.format(macro_name)
        res = requests.post(url, headers=headers, data={'definition': r'{}'.format(macro_definition)})

I have tried replacing `nobody` with:

- admin
- the account that owns the macro

However, neither of these works. I used the following Splunk command to verify that the endpoint does seem to exist:

    | rest /servicesNS/<ACCOUNT>/my_new_app/admin/macros/<MACRO NAME> | search author=<ACCOUNT>

When I run that, I get the following `id`:

    https://127.0.0.1:8089/servicesNS/nobody/my_new_app/admin/macros/<MACRO NAME>

I have also read through the REST API documentation here:

- https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTTUT/RESTbasicexamples
- https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTUM/RESTusing#Namespace
- https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTUM/RESTusing

However, none of these explicitly describe how to update macros, and all I can find when googling are old posts from 2015-2019 that aren't applicable to what I'm trying to achieve.

Any help here would be greatly appreciated. I feel like I'm missing something simple, but I can't find further documentation that applies to macros.
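For reference, here is a stripped-down, runnable version of the failing call against the new namespace, with the response body printed so the actual REST error message is visible. The root URL, auth header, macro name, and definition below are placeholders, and quoting the macro name is just a precaution I added in case it contains special characters:

```python
# Minimal sketch of the POST that stopped working after the move to my_new_app.
# All concrete values below are placeholders, not real ones.
import requests
from urllib.parse import quote

root = "https://127.0.0.1:8089"                  # stands in for <ROOT>
headers = {"Authorization": "Bearer <TOKEN>"}    # stands in for my real auth header

macro_name = "my_macro(1)"                       # example macro name
macro_definition = "index=main | head 10"        # example definition

url = "{}/servicesNS/nobody/my_new_app/admin/macros/{}".format(root, quote(macro_name, safe=""))
res = requests.post(
    url,
    headers=headers,
    data={"definition": macro_definition},
    verify=False,  # only because my test instance uses the default self-signed cert
)

# Print the body as well as the status code, since the reason for a failed
# update is returned in the response body.
print(res.status_code)
print(res.text)
```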
I was following the documentation for Splunk Connect for Syslog (SC4S) so that I could ingest syslog into a Splunk Cloud setup. I cannot turn off the SSL option in my HEC global settings, so I did not uncomment the TLS verify line. I created the file /opt/sc4s/env_file with the following contents:

    SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088
    SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    #Uncomment the following line if using untrusted SSL certificates
    #SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no

I started my sc4s.service (the systemd service created by following the doc) and began getting an exception. I followed this setup for Splunk Cloud. When the sc4s service is started, I get the error below:

    curl: (60) SSL certificate problem: self-signed certificate in certificate chain
    More details here: https://curl.se/docs/sslcerts.html
    curl failed to verify the legitimacy of the server and therefore could not
    establish a secure connection to it. To learn more about this situation and
    how to fix it, please visit the web page mentioned above.
    SC4S_ENV_CHECK_HEC: Invalid Splunk HEC URL, invalid token, or other HEC connectivity issue index=main. sourcetype=sc4s:fallback
    Startup will continue to prevent data loss if this is a transient failure.

If I uncomment the line, I don't see the exception anymore, but I fail to get any message when I search index=* sourcetype=sc4s:events "starting up" as suggested in the documentation. There is also no sample data when I run echo "Hello SC4S" > /dev/udp/<SC4S_ip>/514. Please let me know what I am missing in the setup so that I can proceed.
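To narrow down whether this is purely a certificate-trust problem rather than a bad URL, this is the kind of standalone check I have been running from the SC4S host. The URL is the placeholder from the env_file above, port 8088 and the /services/collector/health endpoint are the HEC defaults, and verify=False mimics what uncommenting SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no does:

```python
# Standalone HEC connectivity check, independent of sc4s itself.
# hec_url is a placeholder; fill in the real Splunk Cloud HEC host.
import requests

hec_url = "https://your.splunk.instance:8088"

# First with certificate verification on (what SC4S does while the TLS_VERIFY line stays commented)...
try:
    r = requests.get(hec_url + "/services/collector/health", timeout=10)
    print("verified:", r.status_code, r.text)
except requests.exceptions.SSLError as err:
    print("verified: SSL error ->", err)

# ...then with verification off, to confirm the URL and port are reachable at all.
r = requests.get(hec_url + "/services/collector/health", verify=False, timeout=10)
print("unverified:", r.status_code, r.text)
```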
Hi, my index stopped running 3 months ago. On checking, I found that the data was not ingested because of an API token issue: the token had expired. I have fixed that now, and I want the data to be loaded again. How do I get the index running again?
I tried to remove the threatq application files from /etc/apps on the search head, but every time I remove them they reappear, even though I also removed its files from /etc/users. Is there any solution for this?
Hi All, I want to extract the service name from sourcetype="aws:metadata" using the source field.

Example: 434531263412:eu-central-1:elasticache_describe_reserved_cache_nodes_offerings

I am using this query:

    index=* sourcetype=aws:metadata
    | eval aws_service=mvindex(split(source,":"),2)
    | rex field=aws_service "(?<aws_service>[^_]+)"
    | table aws_service source
    | dedup aws_service

Using this I get the result: elasticache. But in the case of "434531263412:us-west-2:nat_gateways", it extracts just nat, when it should be gateways. Similarly, for 434531263412:eu-central-1:application_load_balancers, it extracts application. I was thinking we could check for the keyword and update the value. I want to add this in props.conf so that the aws_service field gets created from source. Can anyone help me with how to achieve this?

Regards, PNV
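Just to illustrate the "check for the keyword and update the value" idea I mentioned, here is a sketch of the logic outside of SPL. The override entries and their outputs are only examples of the kind of exceptions I mean, not a final mapping:

```python
# Illustration of the keyword-override idea: take the third ':'-separated segment
# of source, apply explicit overrides, and otherwise fall back to the text before
# the first underscore. The override entries are placeholders.
overrides = {
    "nat_gateways": "gateways",
    "application_load_balancers": "load_balancers",
}

def aws_service(source: str) -> str:
    segment = source.split(":")[2]  # e.g. elasticache_describe_reserved_cache_nodes_offerings
    for keyword, value in overrides.items():
        if segment.startswith(keyword):
            return value
    return segment.split("_")[0]    # default: text before the first underscore

print(aws_service("434531263412:eu-central-1:elasticache_describe_reserved_cache_nodes_offerings"))  # elasticache
print(aws_service("434531263412:us-west-2:nat_gateways"))  # gateways
```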
I can find my DBConnect input inside the "/app/splunk/var/log/splunk/splunk_app_db_connect_job_metrics.log" log. It pretty much runs a "Select * from a table" every 4 hours and sends the results to an index. It always runs to completion with "status=COMPLETED", but at times it finishes with 'error_count > 0', and we notice that the log events for that run don't get added to the index. Where can I see what these errors are and why they are generated?
Greetings, I have just started using Splunk and I was trying to monitor logs from the Files section, and I am getting the following errors while doing so; please help me. I am using a heavy forwarder for this.

I have pointed my forwarder output to 192.168.196.51:9997 and have also configured a receiver on port 9997. I don't know where I am making a mistake. Please help me with this. Thanks and Regards.
I need to create a dashboard panel merging two different search queries. I have the two queries below; kindly help with this request.

    index=test_index source=/applications/test/*instance_abc* ("<--- TRN:" OR "Priority" OR "---> TRN:" OR "AP sent to" OR "AH sent to" OR "MP sent to")
    | rex field=_raw "Priority\=(?<Priority>[^\,]+)"
    | rex "(?:\={3}\>|\<\-{3})\s+TRN[^\:]*\:\s+(?<trn>[^\s]+)"
    | rex "TEST\.RCV\.FROM\.(?<TestMQ>.*)\@"
    | stats count(eval(Priority=="Low")) as Low, count(eval(Priority=="Medium")) as Medium, count(eval(Priority=="High")) as High, values(TestMQ) as TestMQ by trn
    | stats sum(Low) as Low, sum(Medium) as Medium, sum(High) as High by TestMQ
    | addtotals fieldname="TotalCount"
    | sort by TotalCount desc

This gives me output as below:

    TestMQ | Low | Medium | High | TotalCount

The second query is below:

    index=test_index source=/applications/test/*instance_abc* ("<--- TRN:" OR "Priority" OR "---> TRN:" OR "AP sent to" OR "AH sent to" OR "MP sent to")
    | eval field=split(source,"/")
    | eval Instance=mvindex(field,4)
    | chart count(eval(searchmatch("from"))) as Testget count(eval(searchmatch("sent to"))) as Testput count(eval(searchmatch("AP sent to"))) as AP count(eval(searchmatch("AH sent to"))) as AH count(eval(searchmatch("MP sent to"))) as MP by Instance
    | eval Pending = Testget - (AP + AH)
    | sort Testget desc

This gives me output as below:

    Instance | Testget | Testput | AP | AH | MP | Pending

I am looking to merge both queries and get the final output based on Pending volume for the Low, Medium, and High priority counts.

Select: Low, Medium, High (from the dashboard dropdown)

Output expected:

    TestMQ | Low-Testget | Low-Testput | Low-AP | Low-AH | Low-MP | Low-Pending
    TestMQ | Medium-Testget | Medium-Testput | Medium-AP | Medium-AH | Medium-MP | Medium-Pending
    TestMQ | High-Testget | High-Testput | High-AP | High-AH | High-MP | High-Pending
I have a lookup like this:

    Name  Status ExamID
    John  Pass   123
    Bob   Pass   345
    John  Fail   234
    Bob   Pass   235
    Smith Fail   231

My events have Name alone as the unique identifier. I wrote my query like this:

    index=userdata [ inputlookup userinfo.csv | fields Name]
    | lookup userinfo.csv Name as Name OUTPUT Status as Status ExamID as Identifier

Via the first subsearch I extracted only the events belonging to names present in the table, and then I tried to output the Status and ExamID for those names. From the combination of these three values in the event, I need to evaluate a fourth result. For example, for John - Pass - 123, if the ExamID falls between 120 and 125 I need to print the value of the fourth field as "GOOD". However, when I print the output from the lookup I get multivalues like the row below, and when I tried mvappend it did not work correctly. So how do I do this correctly?

    John | Pass Fail | 123 234
Hi Team, Good day!

We have extracted a set of job names from the event using the rex query below:

    index=app_events_dwh2_de_uat _raw=*jobname*
    | rex max_match=0 "\\\\\\\\\\\\\"jobname\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<Name>[^\\\]+).*?\\\\\\\\\\\\\"status\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<State>ENDED OK).*?Timestamp\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<TIME>\d+\s*\d+\:\d+\:\d+).*?execution_time_in_seconds\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<EXECUTION_TIME>[\d\.\-]+)"
    | table "TIME", "Name", "State", "EXECUTION_TIME"
    | mvexpand TIME
    | dedup TIME

After using the above query, we obtain a result table like the one below:

    TIME               Name                         State     EXECUTION_TIME
    20240417 21:13:23  CONTROL_M_REPORT             ENDED OK  73.14
                       DWHEAP_FW_BHW                ENDED OK  80.66
                       DWHEAP_FW_TALANX             ENDED OK  80.18
                       DWHEAP_TALANX_LSP_FW_NODATA  ENDED OK  3.25
                       SALES_EVENT_TRANSACTION_RDV  ENDED OK  141.41

Is it possible to extract, from the above set of job names, only the jobs whose name contains the string NODATA?

Below is a sample event for the above:

    Dataframe row : {"_c0":{"0":"{","1":" \"0\": {","2":" \"jobname\": \"CONTROL_M_REPORT\"","3":" \"status\": \"ENDED OK\"","4":" \"execution_time_in_seconds\": \"46.39\"","5":" \"Timestamp\": \"20240418 12:13:23\"","6":" }","7":" \"1\": {","8":" \"jobname\": \"DWHEAP_FW_AIMA_001\"","9":" \"status\": \"ENDED OK\"","10":" \"execution_time_in_seconds\": \"73.14\"","11":" \"Timestamp\": \"20240418 12:13:23\"","12":" }","13":" \"2\": {","14":" \"jobname\": \"DWHEAP_FW_BHW\"","15":" \"status\": \"ENDED OK\"","16":" \"execution_time_in_seconds\": \"71.19\"","17":" \"Timestamp\": \"20240418 12:13:23\"","18":" }","19":" \"3\": {","20":" \"jobname\": \"DWHEAP_FW_NODATA\"","21":" \"status\": \"ENDED OK\"","22":" \"execution_time_in_seconds\": \"80.63\"","23":" \"Timestamp\": \"20240418 12:13:23\"","24":" }","25":" \"4\": {","26":" \"jobname\": \"DWHEAP_FW_TALANX\"","27":" \"status\": \"ENDED OK\"","28":" \"execution_time_in_seconds\": \"80.20\"","29":" \"Timestamp\": \"20240418 12:13:23\"","30":" }","31":" \"5\": {","32":" \"jobname\": \"DWHEAP_FW_UC4_001\"","33":" \"status\": \"ENDED OK\"","34":" \"execution_time_in_seconds\": \"80.13\"","35":" \"Timestamp\": \"20240418 12:13:23\"","36":" }","37":" \"6\": {","38":" \"jobname\": \"DWHEAP_TALANX_LSP_FW_NODATA\"","39":" \"status\": \"ENDED NOTOK\"","40":" \"execution_time_in_seconds\": \"120.12\"","41":" \"Timestamp\": \"20240418 12:13:23\"","42":" }","43":" \"7\": {","44":" \"jobname\": \"RDV_INFRASTRUCTURE_DETAILS\"","45":" \"status\": \"ENDED OK\"","46":" \"execution_time_in_seconds\": \"81.16\"","47":" \"Timestamp\": \"20240418 12:13:23\"","48":" }","49":" \"8\": {","50":" \"jobname\": \"VIPASNEU_STG\"","51":" \"status\": \"ENDED OK\"","52":" \"execution_time_in_seconds\": \"45.04\"","53":" \"Timestamp\": \"20240418 12:13:23\"","54":" }","55":"}"}}

Please look into this and kindly help us extract the jobs containing the string NODATA from the set of job names above.
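To be clear about the output we are after, this is the filter we have in mind, sketched outside of SPL in Python; the list below is just the job names from the sample result table above:

```python
# Illustration of the desired filter: keep only job names containing "NODATA".
job_names = [
    "CONTROL_M_REPORT",
    "DWHEAP_FW_BHW",
    "DWHEAP_FW_TALANX",
    "DWHEAP_TALANX_LSP_FW_NODATA",
    "SALES_EVENT_TRANSACTION_RDV",
]

nodata_jobs = [name for name in job_names if "NODATA" in name]
print(nodata_jobs)  # ['DWHEAP_TALANX_LSP_FW_NODATA']
```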
Hi All, I have a JSON event which contains test cases, each test case's status, and the Jenkins build number. There are many test cases in my events. I want to find whether any test case is failing continuously across more than 5 Jenkins builds. If any test case is failing continuously in 5 builds, I want to list those test cases. I have tried streamstats but have not been able to implement it fully. Does anyone have a better approach? Please guide me on this.
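To make the requirement concrete, this is the streak logic I am trying to express, sketched in Python over made-up (test case, build number, status) records; the record values and the field layout are just illustrative assumptions:

```python
# Sketch of the detection logic: a test case qualifies if it has failed in 5 or
# more consecutive Jenkins builds. The records below are made-up examples.
from collections import defaultdict

records = [
    # (testcase, build_number, status)
    ("test_login", 101, "FAIL"), ("test_login", 102, "FAIL"), ("test_login", 103, "FAIL"),
    ("test_login", 104, "FAIL"), ("test_login", 105, "FAIL"),
    ("test_search", 101, "FAIL"), ("test_search", 102, "PASS"), ("test_search", 103, "FAIL"),
]

by_test = defaultdict(list)
for name, build, status in records:
    by_test[name].append((build, status))

flagged = []
for name, runs in by_test.items():
    streak = 0
    for _, status in sorted(runs):           # iterate in build-number order
        streak = streak + 1 if status == "FAIL" else 0
        if streak >= 5:
            flagged.append(name)
            break

print(flagged)  # ['test_login']
```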
Hi Community, I have a question about regex and extraction. My _raw data comes in 2 rows/lines (keys and values) and I have to extract fields as key/value pairs, e.g.:

    row 1: Test1 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9 Test10
    row 2: 101   102   103.  104.  105.  106.  107.  108.  109.  110

I have to extract only Test7 from the above log and print its value in a table.

Please help me.

Regards, Moin
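Outside of SPL, this is the pairing I am trying to achieve, sketched in Python with the two sample rows above; splitting on whitespace is an assumption about how the columns line up:

```python
# Sketch of the key/value pairing: row 1 holds the field names, row 2 the values,
# and I only need the value of Test7.
row1 = "Test1 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9 Test10"
row2 = "101   102   103.  104.  105.  106.  107.  108.  109.  110"

fields = dict(zip(row1.split(), row2.split()))
print(fields["Test7"])  # 107.
```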
Hi All, I want to know whether anyone has a document or configuration guide on how to integrate Splunk with HANA using PowerConnect.
Hi, I am trying to create a daily alert to email the contents of the Security Posture dashboard to a recipient. Can someone please share how I can turn the content of this dashboard from Splunk ES into a search within an alert, so it can be added to an email and sent out daily? Thanks
As per the above screenshot, I am unable to view the Data Summary tab in our Splunk search environment.
On the cluster master, in one of the $SPLUNK_HOME/etc/master-apps/<app-name>/local/indexes.conf files, I set remote.s3.access_key and remote.s3.secret_key to the same access_key and secret_key used with s3cmd. However, after applying the cluster bundle, the indexes.conf is updated and both key values are replaced. The new set of keys not only replaces the ones under the [default] stanza, but also those in each index stanza. Where do the new keys come from? Is it expected that the keys get overwritten?
I have a log stream in this format:

    level=info request.elapsed=100 request.method=GET request.path=/orders/123456 request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270 response.status=500

I have extracted the fields using regex:

    | rex field=message "level=info request.elapsed=(?<duration>.*) request.method=(?<method>.*) request.path=(?<path>.*) request_id=(?<request_id>.*) response.status=(?<statusCode>.*)"

I want to manually build a new field called route based on the extracted field path. For example, for "path=/order/123456", I want to create a new field "route=/order/{orderID}", so I can group by route rather than by path, since the path contains a real parameter and I cannot group on path. How can I achieve this? Thanks.
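For clarity, this is the kind of normalization I mean, sketched in Python just to show the transformation; the placeholder name and the rule that any all-numeric path segment is a parameter are assumptions for illustration, not the final mapping:

```python
# Sketch of the path -> route normalization: numeric path segments become a placeholder.
import re

def to_route(path: str) -> str:
    # Replace any purely numeric path segment with an {orderID}-style placeholder.
    return re.sub(r"/\d+(?=/|$)", "/{orderID}", path)

print(to_route("/orders/123456"))        # -> /orders/{orderID}
print(to_route("/orders/123456/items"))  # -> /orders/{orderID}/items
```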
Hi everybody, I was doing an internal demo presentation with demo1, and someone noticed that on one server the memory usage was high (87.8%). When we checked the processes that were running, there was no process consuming memory except for about 3% from the machine agent, so we don't understand why it is showing that 87.8% peak: the server reads 87.8% memory usage, yet no process is consuming memory except the machine agent.

In another example the opposite happens: the memory usage on the server is 34.6%, but the sum of the processes is way more than 100%.

Is this an interpretation problem on our part, or just an issue with demo1? In other demos the sum is correct according to each process. Thanks in advance. Hope you're having a great day.
So, I am running a job and I can see all my jobs and all the other users' jobs. However, the other users/power users cannot see my jobs that are running. What could cause that?

Some users cannot see my dashboard panels that use loadjob because they don't have the permissions, even though I have both read and write enabled for everyone. Why could that be?
I am getting a message that our Splunk certificate is expired when I scan our systems. However, I cannot find the certificate anywhere in Windows Certificates. I also searched C:\Program Files\Splunk\etc\auth\mycerts and it is empty. I also checked the config in \Splunk\etc\system\local, and the web config doesn't have anything about a cert in there. How can I find this cert and where is it coming from? It's on our web port.
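For reference, this is a small, not Splunk-specific sketch I can run to see which certificate the web port is actually serving, so I at least know what the scanner is flagging; the hostname and port 8000 (Splunk Web's usual default) are placeholders to adjust:

```python
# Fetch whatever certificate the web port is serving so it can be inspected.
# Hostname and port are placeholders; adjust to the scanned host/port.
import ssl

host, port = "my-splunk-host", 8000  # hypothetical values

# get_server_certificate does not validate the chain, so it works even for
# self-signed or expired certificates.
pem = ssl.get_server_certificate((host, port))
print(pem)  # inspect with a decoder or `openssl x509 -noout -subject -dates` to read the expiry
```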