All Posts

Hello everyone. I need to create a metric or Health Rule which does the following:

Warning: 15% of calls with response time >= 50 secs
Critical: 30% of calls with response time >= 50 secs
Critical: 10% of calls with errors

Is this possible with AppDynamics? I'm trying with this formula:

({n_trx_rt}>=50000/{total_trx})*100

Where n_trx_rt = Average Response Time and total_trx = Calls per minute. This gives me a result, but I'm not sure whether the operation is supported by AppDynamics.
You could start by integrating your Splunk instance with LDAP so that your dashboard searches query the AD data directly via LDAP. This app should help: SA-ldapsearch - https://splunkbase.splunk.com/app/1151
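Once the add-on is configured, a dashboard search could pull user records straight from AD along these lines (a sketch only; the domain stanza name, LDAP filter, and attribute list are placeholders, and the exact command options are documented in the app):

| ldapsearch domain=default search="(&(objectClass=user)(objectCategory=person))" attrs="sAMAccountName,displayName,mail"
| table sAMAccountName displayName mail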
Can you double-check that the configuration is correct on the deployment client? Using btool:

$SPLUNK_HOME/bin/splunk btool deploymentclient list
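For comparison, a working deploymentclient.conf usually contains something like the following (the deployment server host and management port are placeholders):

[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089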
Try modifying this cURL request to suit your needs (adjust the endpoint, search, and token):

curl -k -H 'Authorization: Splunk <your_token_here>' https://your_searchhead_here:8089/services/search/v2/jobs/export -d search="search index=* | head 10 | table host"
So far I have created this join:

index="index" "mysearchtext"
| rex field=message ", request_id: \\\"(?<request_id>[^\\\"]+)"
| fields _time request_id
| eval matchfield=request_id
| join matchfield
    [ search index="index"
    | spath request.id
    | rename request.id as id
    | eval matchfield=id
    | fields matchfield mynewfield ]
| table _time request_id mynewfield

Basically I want to join two logs where request_id = id. The join works as expected but, as you'd expect, it is not efficient. I'd like to replace it with a more efficient search, leveraging the fact that the events of the subsearch (where I extract the field "mynewfield") are always indexed a few milliseconds after the events of the main search (where I extract the field request_id). Another useful piece of information is that the logs matching "mysearchtext" are far fewer than the logs in the subsearch.

Here is a sample of the data:

{"AAA": "XXX","CCC":"DDD","message":{"request":{ "id": "MY_REQUEST_ID"} } }
{"AAA": "XXX","CCC":"DDD","message":"application logs in text format e.g. 2024/04/26 06:35:21 mysearchtext headers: [], client: clientip, server, host, request_id=\"MY_REQUEST_ID\" "}

The first event contains the message field as a JSON string; we have thousands of these logs. The second ones are "alerts" and we have just a few of them; the format of their "message" field is plain text. Both contain the value MY_REQUEST_ID, which is the field I have to use to correlate the two logs. The output should be a table of ONLY the events with "mysearchtext" (the second event) with some additional fields coming from the first event. The events above are sorted by time (reverse order); the second event happens just a few milliseconds before the first one (basically the second one is just a log message of the same REST request as the first event; the first event is the REST response sent to the customer).
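A commonly suggested join-free pattern for this kind of correlation looks roughly like the sketch below. It is not from this thread: it assumes message.request.id is auto-extracted from the JSON events and that mynewfield is available the same way as in the subsearch above.

index="index"
| rex field=message ", request_id: \\\"(?<request_id>[^\\\"]+)"
| eval matchfield=coalesce(request_id, 'message.request.id')
| stats min(_time) as _time values(request_id) as request_id values(mynewfield) as mynewfield by matchfield
| where isnotnull(request_id)
| table _time request_id mynewfield

The stats-by-correlation-key approach avoids the subsearch row limits and usually scales much better than join.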
Not just Splunk. Python also obligatorily treats "\xHH" in double quotes as escape sequences and rejects this data as JSON. Like Splunk, it doesn't do this with "\n" if they are in the input.

I've no idea where those control characters (\n, \x etc.) are coming from. They are not in the data that the mainframe sends to Splunk.

Could you clarify the method you use to verify that \xHH are not in the mainframe data? What do you use to inspect that data? Do you see newlines in places where "\n" shows in Splunk? As @ITWhisperer says, Splunk doesn't have the habit of inserting characters into ingested data. Meanwhile, mainframes use an IBM-specific character set (EBCDIC) internally, so when one sends data out, something has to perform a conversion. Most importantly, if you view the data in a mainframe terminal and do not see those characters, that is not proof that they are not in the data; even if you view the data in an intermediary terminal emulator, such as those on a Unix machine, those emulators can also interpret the translated control characters according to IBM's definition. After all, control characters are used to control visual effects in terminals and are by definition invisible to terminal users of the native platform, and a terminal emulator is expected to interpret converted control characters according to their native functions.

My hypothesis is that those control characters are present in the data stream sent from the mainframe. The best solution is either to fix that on the mainframe, or to insert a pre-processor to escape/strip the control characters.

In the short term, instead of resorting to regex on a structured dataset, I recommend using regex to escape those control characters, then letting Splunk's robust functions do their job:

| fields _raw
| rex mode=sed "s/\\\\x/\\\\\\x/g"
| spath

Using the sample data, my output is (shown here as field: value pairs):

ACTION: INFORMATIONAL
CONSOLE: INTERNAL
DATETIME: 2024-04-24 13:34:47.92 +0100
JOBID: STC15694
JOBNAME: RDSONLVP
MFSOURCETYPE: SYSLOG
MSGNUM: IEC147I
MSGREQTYPE: (empty)
MSGTXT: IEC147I 613-04,IFG0195B,RDSONLVP,RDSONLVP,IIII4004,449F,JE5207, RDS.VPLS.PDLY0001.PFDRL.U142530.E240220\x9C \x80\x80
SYSLOGSYSTEMNAME: A090
SYSPLEX: UKPPLX01
_raw: {"MFSOURCETYPE":"SYSLOG","DATETIME":"2024-04-24 13:34:47.92 +0100","SYSLOGSYSTEMNAME":"A090","JOBID":"STC15694","JOBNAME":"RDSONLVP","SYSPLEX":"UKPPLX01","CONSOLE":"INTERNAL","ACTION":"INFORMATIONAL","MSGNUM":"IEC147I","MSGTXT":"IEC147I 613-04,IFG0195B,RDSONLVP,RDSONLVP,IIII4004,449F,JE5207,\nRDS.VPLS.PDLY0001.PFDRL.U142530.E240220\\x9C\n \\x80\\x80","MSGREQTYPE":""}

(Note how all "\xHH" sequences become "\\xHH" in _raw.)

This is an emulation you can play with and compare with real data:

| makeresults
| eval _raw = "{\"MFSOURCETYPE\":\"SYSLOG\",\"DATETIME\":\"2024-04-24 13:34:47.92 +0100\",\"SYSLOGSYSTEMNAME\":\"A090\",\"JOBID\":\"STC15694\",\"JOBNAME\":\"RDSONLVP\",\"SYSPLEX\":\"UKPPLX01\",\"CONSOLE\":\"INTERNAL\",\"ACTION\":\"INFORMATIONAL\",\"MSGNUM\":\"IEC147I\",\"MSGTXT\":\"IEC147I 613-04,IFG0195B,RDSONLVP,RDSONLVP,IIII4004,449F,JE5207,\\nRDS.VPLS.PDLY0001.PFDRL.U142530.E240220\\x9C\\n \\x80\\x80\",\"MSGREQTYPE\":\"\"} "
``` data emulation above ```
Each component has its own authentication settings (in the case of a search head cluster they are either pushed from the deployer to all members or configured at run time and distributed among the members). So it's only natural that you can't authenticate to an indexer using an SH user. If you can authenticate on your indexer, it means someone needlessly pushed LDAP configuration to the indexer layer (users don't interact with indexers directly!).
Hello Experts, I'm trying to create a Python script to run ad-hoc searches via an API request, but the documentation has me opening webpage after webpage. I've already created a token. Can someone please help me with this task? Thank you in advance.
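A minimal sketch of such a script, assuming the requests library and a Splunk authentication token; the base URL, token, and search string are placeholders, and it reuses the same /services/search/v2/jobs/export endpoint shown in the cURL example elsewhere on this page. If your token is actually a session key obtained from /services/auth/login, use the 'Splunk <key>' authorization scheme instead of 'Bearer'.

import requests

# Placeholders -- substitute your own search head, token, and search.
BASE_URL = "https://your_searchhead_here:8089"
TOKEN = "<your_token_here>"
SEARCH = "search index=_internal | head 10 | table host"

# The export endpoint streams results as the search runs,
# so there is no search job to poll for completion.
response = requests.post(
    f"{BASE_URL}/services/search/v2/jobs/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={"search": SEARCH, "output_mode": "json"},
    verify=False,   # equivalent of curl -k; point this at a CA bundle in production
    stream=True,
)
response.raise_for_status()

# Each non-empty line of the streamed response is one JSON object.
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))

For long-running searches you may prefer to create a job via /services/search/v2/jobs, poll its status, and then fetch the results, but the export endpoint keeps the ad-hoc case simple.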
I recently upgraded Splunk and Security Essentials. After the upgrade I followed @Christoph_vW's instructions and I am not seeing any errors.
@yuanliu , please find my answers below:

1. What I meant was for you to run that query standalone, not embedded in a complex search. The purpose is to directly confirm/demonstrate that your Splunk instance performs fillnull as designed. Nevertheless, your results with my silly test still demonstrate that fillnull works perfectly in your Splunk.

Yes, correct. When I run the makeresults query I get the same output as you, as shown below:

ATM DMM Income Rej_app Rej_log Reject
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

2. Your search does not return events with the following strings: "Letter published correctley to DMM subject", "Letter rejected due to: DOUBLE_KEY", and "Letter rejected due to: UNVALID_LOG"; but it does return events with "Letter rejected due to: UNVALID_DATA_APP". (You can verify this by, e.g., searching without stats and observing; there are many other ways to verify.)

Here I gave an example, but the issue is with all 6 strings. For example, if I search data for the last 15 minutes and logs are present for a particular string, it shows the count; but if logs are not present, it shows null.

3. makeresults in the appendcols subsearch only fills 3 rows. (Do run it standalone so you understand what it does.)

When I run it standalone I also see the same 3 rows only:

ATM DMM Income Rej_app Rej_log Reject
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

4. Now that we have established that fillnull works correctly, let me point out that this latest illustrated output contains no "empty" cells in ATM (aka "Letter published correctley to ATM subject") and Rej_app (aka "Letter rejected due to: UNVALID_DATA_APP"), the only two columns where your present search actually returns results. Can you reproduce the problem you described? (No appendcols-makeresults business.)

Here I gave an example, but the issue is with all 6 strings. When I select a particular time range, if logs are present then I see the count, otherwise it displays null.

5. I also want to point out that your OP illustrated drastically different column names from your later comments. This type of unexplained difference makes volunteers' mind-reading a lot more difficult. Always explain your dataset, desired results, and the logic between the two in plain language (preferably without SPL), then your attempted SPL and actual results, then explain how the actual results differ from the desired results if that's not painfully obvious - oftentimes it is not to outsiders. If you need to change mock data/results from a previous message, immediately point out and explain those changes.

I apologise for that and will make sure to provide uniform data.

6. The biggest discrepancy I see in your case is that it is impossible for count in any stats command (including chart and timechart) to give "empty" output. So there must be some other command AFTER stats that gives bad output. You need to first examine/exemplify the output from chart, then scrutinize every command after that to find which one/ones.

Sorry, I did not get you; can you please let me know the query?
@bowesmana  Is there a way to switch from chart to table? The options given earlier seem to only apply to variations of charts. Curious if there are options to go from chart to table.
@yuanliu wrote: Do you get all 0 from this?

What I meant was for you to run that query standalone, not embedded in a complex search. The purpose is to directly confirm/demonstrate that your Splunk instance performs fillnull as designed. Nevertheless, your results with my silly test still demonstrate that fillnull works perfectly in your Splunk.

Read appendcols to see why only DMM, Rej_log, and Reject have 0s, and why only three rows have zeros. But let me give some hints:

Your search does not return events with the following strings: "Letter published correctley to DMM subject", "Letter rejected due to: DOUBLE_KEY", and "Letter rejected due to: UNVALID_LOG"; but it does return events with "Letter rejected due to: UNVALID_DATA_APP". (You can verify this by, e.g., searching without stats and observing; there are many other ways to verify.)

makeresults in the appendcols subsearch only fills 3 rows. (Do run it standalone so you understand what it does.)

Now that we have established that fillnull works correctly, let me point out that this latest illustrated output contains no "empty" cells in ATM (aka "Letter published correctley to ATM subject") and Rej_app (aka "Letter rejected due to: UNVALID_DATA_APP"), the only two columns where your present search actually returns results. Can you reproduce the problem you described? (No appendcols-makeresults business.)

I also want to point out that your OP illustrated drastically different column names from your later comments. This type of unexplained difference makes volunteers' mind-reading a lot more difficult. Always explain your dataset, desired results, and the logic between the two in plain language (preferably without SPL), then your attempted SPL and actual results, then explain how the actual results differ from the desired results if that's not painfully obvious - oftentimes it is not to outsiders. If you need to change mock data/results from a previous message, immediately point out and explain those changes.

The biggest discrepancy I see in your case is that it is impossible for count in any stats command (including chart and timechart) to give "empty" output. So there must be some other command AFTER stats that gives bad output. You need to first examine/exemplify the output from chart, then scrutinize every command after that to find which one/ones.
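For anyone following along, the kind of standalone check being suggested might look roughly like the sketch below. This is not the thread's answer: the index name, the span, and the mapping of the reject strings to column names are assumptions, and the Income column is omitted because its source string never appears in the thread. The explicit field list on fillnull is what forces a 0 column to exist even for strings that returned no events in the selected time range.

index=your_index ("Letter published correctley to ATM subject" OR "Letter published correctley to DMM subject" OR "Letter rejected due to: DOUBLE_KEY" OR "Letter rejected due to: UNVALID_LOG" OR "Letter rejected due to: UNVALID_DATA_APP")
| eval type=case(searchmatch("Letter published correctley to ATM subject"), "ATM",
    searchmatch("Letter published correctley to DMM subject"), "DMM",
    searchmatch("DOUBLE_KEY"), "Reject",
    searchmatch("UNVALID_LOG"), "Rej_log",
    searchmatch("UNVALID_DATA_APP"), "Rej_app")
| timechart span=15m count by type
| fillnull value=0 ATM DMM Reject Rej_log Rej_app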
Hi, I have extracted fields manually in Splunk Cloud. The regex works perfectly in the field extraction preview page, but while searching the fields are not showing up. I have set the permissions to global, I am searching in verbose mode, and I have set the field coverage to "All fields". I am trying to extract 6 fields (all the regexes work in the preview page), out of which only one field (IP address) shows up in search. I have debugged, refreshed the page, and bumped it as well. The funny part is that if I use the default regex expression in Splunk instead of writing my own regex, the fields pop up in search. Also, I have observed a couple of fields showing up and then disappearing while searching.

Sample data:

nCountry: United States\nPrevious Country

Regex:

nCountry\:\s(?<country>.+?)\\\nPrevious\sCountry

I have gone through almost all the Splunk answers on this, but none of the solutions fixed my problem. I am intrigued to know why it is not working. Thanks in advance.
No. You can define an output group and load-balance your events between both indexers. They don't have to be cluster members.
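A minimal outputs.conf sketch of such a load-balanced output group on the forwarder (the indexer hostnames and ports are placeholders):

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997

With this in place the forwarder automatically distributes events across the servers listed in the group.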
Ok. We have no context. You're writing as if we were supposed to know what you are talking about. You're posting in the Splunk Enterprise section of this forum, which is meant for questions specific to on-premise software functionality and issues, yet you selected a specific add-on as the product you're referring to. In that case you should have posted in the 'All Apps and Add-ons' section. We don't have crystal balls and don't know what you mean.
+1 to what @richgalloway wrote - the official requirements are a bit... imprecise here and no one really knows how to interpret them. From my personal experience, it means that:

1) All cluster members should be running on the same operating system - a 100% Linux cluster or a 100% Windows cluster.

2) All members should run on the same architecture (I don't remember if there are 32-bit versions available anymore, but back when there were it might have mattered, so you mustn't mix 32-bit and 64-bit; and of course don't try to add any ARM machines to the mix if/when they become available).

3) As long as the cluster members are properly set up on each respective OS they should work, but it is good practice to keep things homogeneous - it saves you maintenance and troubleshooting. Also, Splunk Support can reject cases if you have a mixed environment, especially if an issue is present on one OS and not showing on another.
In props.conf, the original_host extraction won't work for the majority of users:

EXTRACT-original_host = \d+-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[\+\-]\d{2}:\d{2}\s(?<original_host>\S+)

original_host is, I believe, a crucial field, so that all datamodels can work as expected.
Honestly? I have no idea what you're talking about. Could you be more specific?
It's a bit vague what you're trying to do. You can't get two separate result sets from one search.
First you have to think about what exactly the components in your dashboard should search for. Then you have to check whether you have data to search. After that comes the time to write the searches that find those things. The last step is turning the results of those searches into visualizations, or making them dynamic based on the contents of the dashboard inputs. So where are you in this process?