All Topics


We recently had an issue with the Splunk scheduler wherein correlation searches weren't running (fixed by simply restarting the SHC members). Because of this, we've lost notable events. I thought I could backfill these using the fill_summary_index.py script, but it seems this may not be the right approach. I'm able to successfully kick off "backfilling" correlation searches, however I'm not seeing any notable events added to the notable index. For example:

splunk cmd fill_summary_index.py -app <app> -name <search> -et <start epoch> -lt <end epoch> -dedup true -nolocal true -j 4

Can someone please confirm or deny this?
Hi, we use Splunk Cloud, which gets logs from two HFs, which in turn get logs from many UFs. A few of those UFs live on our domain controllers, which interact to some extent with the LDAP API and get notified every time an AD object changes (https://www.splunk.com/en_us/blog/tips-and-tricks/working-with-active-directory-on-splunk-universal-forwarders.html).

What happens now is that every time LAPS changes the passwords, the computer object gets updated, the UF gets hold of those passwords, and we can see them in plaintext in Splunk Cloud.

After discovering this, I added this to props.conf (Splunk\etc\system\local) on the HF and restarted the HF:

[ActiveDirectory]
SEDCMD-pwdmask = s/(ms\-Mcs\-AdmPwd\=).+/########/g

Since that didn't work, I tried this:

[ActiveDirectory]
SEDCMD-anonymiseLaps = 's/ms-Mcs-AdmPwd\=.*/ms-Mcs-AdmPwd=####!!!!!#####/g'

(Source: https://www.databl.io/anonymise-your-clear-text-laps-passwords-in-splunk/ - this describes the problem pretty well.)

That hasn't worked either; we still see the passwords. Has anybody encountered similar problems and/or have hints or possible solutions? Thanks in advance.
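For reference, SEDCMD values in props.conf are normally written without surrounding quotes, and the stanza only fires on the first full Splunk instance that parses the events, and only when the stanza name matches the events' actual sourcetype. A minimal sketch, assuming the sourcetype really is ActiveDirectory and the HF is doing the parsing (the regex here is an illustration, not a tested mask):

```ini
# props.conf on the heavy forwarder -- sketch only.
# Assumptions: events arrive with sourcetype "ActiveDirectory" and this HF
# is the first full Splunk instance to parse them.
[ActiveDirectory]
# No quotes around the sed expression: a quoted value is taken literally.
# SEDCMD applies at index time, so passwords already indexed stay visible
# until those events age out.
SEDCMD-pwdmask = s/ms-Mcs-AdmPwd=[^\r\n]*/ms-Mcs-AdmPwd=########/g
```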
Hi all, good day! I have a question about the Windows Event Log monitor. I have it configured and can see the custom log in the application dashboard. Now I have to configure an alert for a particular event in the custom log. Can you share a step-by-step configuration for alerting on a custom event, and which option to choose for the alert: an HTTP request template, an email template, or an email digest? I have tried the HTTP template and the email digest. The HTTP template creates a separate event, but no mails are sent when the custom log has data. Kindly help me with this. Thanks.
I need help with adding an asset input stanza for the lookup source. I created a sample lookup that has the proper headers and set it up to be shared with the app, however I can't seem to get my lookup to show up within the source dropdown on the asset lookup configuration page. Is there a certain way to get the lookup to appear under that dropdown? I am able to see the demo_assets.csv lookup but not the one I configured. I will upload a picture showing the step in the Splunk docs where I am stuck.
Hi experts, I am new to Splunk and came across this requirement at work.

Requirement: I want to create a table showing the numbers of two different versions of reCAPTCHA being successfully and unsuccessfully processed.

Current log info: Each event has a field named "msg" which contains a lot of information, including wording like "Exception: recaptcha v 2 validation failure", "Exception: recaptcha v 3 validation failure", "Recaptcha v2 verification: successful", and "Recaptcha v3 verification: successful", depending on the event.

Tasks: How can I create a regex to count the number of all exceptions and the number of each type of exception? The same applies to the success messages, but I can figure those out if someone can help with the previous question. Thank you.
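To illustrate one possible pattern: a sketch in Python of a regex that captures the version digit from the failure messages, assuming the real msg values match the wording quoted above (in SPL, the same regex could be used with rex and then counted with stats):

```python
import re
from collections import Counter

# Sample "msg" values modeled on the wording quoted above (assumption:
# real events match this shape, including the space after "v" in failures).
msgs = [
    "Exception: recaptcha v 2 validation failure",
    "Exception: recaptcha v 3 validation failure",
    "Recaptcha v2 verification: successful",
    "Recaptcha v3 verification: successful",
    "Exception: recaptcha v 2 validation failure",
]

# One regex for failures: captures the version digit, tolerant of an
# optional space between "v" and the number.
failure_re = re.compile(r"Exception: recaptcha v\s*(\d) validation failure")

counts = Counter()
for msg in msgs:
    m = failure_re.search(msg)
    if m:
        counts[f"v{m.group(1)}_failure"] += 1

print(counts)  # failure count per version; total is sum(counts.values())
```

In SPL, the rough equivalent (an assumption to adapt to your data) would be something like `| rex field=msg "recaptcha v\s*(?<ver>\d) validation failure" | stats count by ver`.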
Hi, has anyone run into this? I am still on DB Connect 3.2 and have used dbxquery successfully for years. I am just trying to use dbxlookup but cannot get even a simple example to work:

| makeresults count=3
| streamstats count as id
| dbxlookup connection="agena_ro" query="SELECT * FROM `agena_production`.`plans`" "id" AS "id" OUTPUT "description" AS "description"

I see this in the logs, which looks suspicious:

2021-08-09 15:19:37.141 11088@prod3splunksearchl [main] INFO com.splunk.dbx.connector.logger.AuditLogger - operation= connection_name= stanza_name= state=success sql='SELECT "id", "description" FROM (SELECT * FROM `agena_production`.`plans`) dbxlookup WHERE "id" IN (?,?,?)'

Thanks. phunte
I am using the following query to retrieve events that I then display. I would like to add another column that is the difference between the two columns. Each log event has a field called app_elements={eventtype='event1','widget'='apple'}, for example.

The query:

index="aws" level="info" env="dev" earliest=-72h latest=-48h
| spath input=app_elements
| stats count by eventtype
| eval Period="Before"
| append [search index="aws" level="info" env="dev" earliest=-24h latest=now
    | spath input=app_elements
    | stats count by eventtype
    | eval Period="Now" ]
| chart sum(count) over eventtype by Period

The current result:

eventtype   Before   Now
event1          10    20
event2          15    12
event3          22    20
event4           5     8

The desired result:

eventtype   Before   Now   Difference
event1          10    20           10
event2          15    12           -3
event3          22    20           -2
event4           5     8            3
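One common SPL approach (an assumption to verify against your data) is to append `| eval Difference='Now'-'Before'` after the chart, possibly with a `fillnull value=0` first so an event type missing from one period counts as 0. The arithmetic itself, sketched in Python with the numbers from the table above:

```python
# Per-eventtype counts from the table above.
before = {"event1": 10, "event2": 15, "event3": 22, "event4": 5}
now = {"event1": 20, "event2": 12, "event3": 20, "event4": 8}

# Difference = Now - Before, defaulting a missing event type to 0
# (in SPL, fillnull plays the same role before the subtraction).
difference = {
    et: now.get(et, 0) - before.get(et, 0)
    for et in sorted(set(before) | set(now))
}
print(difference)
```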
This seems to be an odd issue, or at least I've been searching for the wrong thing. My event sourcetype is JSON, and the events log and display just fine. However, one of the fields of the event contains more JSON that is just being displayed as if it were a string. How can I extract the fields from this string of JSON?

Raw event:

{"Level":"Trace","MessageTemplate":"{\"Id\":\"000000000000000000000000\",\"HttpTracker\":{\"Method\":\"GET\",\"UserAgent\":\"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:90.0) Gecko/20100101 Firefox/90.0\",\"TimeOfCall\":\"2021-08-09T20:08:29.6311024Z\",\"StatusCode\":200,\"Url\":\"http://localhost:45705/Job/JobSelectionTableData?page=0&size=25&sort=col[4]=1&filter=filter&jobType=0\",\"Action\":\"JobSelectionTableData\",\"Controller\":\"Job\",\"Parameters\":{\"page\":\"0\",\"size\":\"25\",\"sort\":\"col[4]=1\",\"filter\":\"filter\",\"jobType\":\"UserCreated\"}},\"Notes\":\"\",\"UserId\":\"5b759c5cbb67fd479489f1ab\",\"Properties\":{\"ServerName\":\"LCS-AL-HNXX8Y2\",\"JobId\":\"000000000000000000000000\",\"TimeTaken\":\"1.998\"},\"HasBeenRead\":false,\"CallType\":1}","RenderedMessage":"{\"Id\":\"000000000000000000000000\",\"HttpTracker\":{\"Method\":\"GET\",\"UserAgent\":\"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:90.0) Gecko/20100101 Firefox/90.0\",\"TimeOfCall\":\"2021-08-09T20:08:29.6311024Z\",\"StatusCode\":200,\"Url\":\"http://localhost:45705/Job/JobSelectionTableData?page=0&size=25&sort=col[4]=1&filter=filter&jobType=0\",\"Action\":\"JobSelectionTableData\",\"Controller\":\"Job\",\"Parameters\":{\"page\":\"0\",\"size\":\"25\",\"sort\":\"col[4]=1\",\"filter\":\"filter\",\"jobType\":\"UserCreated\"}},\"Notes\":\"\",\"UserId\":\"5b759c5cbb67fd479489f1ab\",\"Properties\":{\"ServerName\":\"LCS-AL-HNXX8Y2\",\"JobId\":\"000000000000000000000000\",\"TimeTaken\":\"1.998\"},\"HasBeenRead\":false,\"CallType\":1}","Properties":{"host":"LCS-AL-HNXX8Y2","threadid":"6","logger":"TOPSS.UserLogger.ActionTrackerContext"}}

Splunk recognizes this as JSON and displays it as structured fields (screenshot omitted). Notice the MessageTemplate field contains more JSON. That is what I'm trying to extract fields from, and I'm coming up empty thus far. A few things I've tried that don't work:

MYSEARCH | spath output=Id path=MessageTemplate.Id

MYSEARCH | spath MessageTemplate

Any help would be much appreciated. This type of extraction is very new to me!
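The inner field is a JSON document serialized into a string, so it has to be parsed a second time. A sketch in Python of the double parse, using a trimmed stand-in for the event above (in SPL, `| spath input=MessageTemplate` is the usual way to point spath at a field's contents instead of _raw, though that's worth verifying against the spath docs):

```python
import json

# Trimmed stand-in for the event above: the MessageTemplate field is a
# JSON document serialized into a string inside the outer JSON.
raw_event = json.dumps({
    "Level": "Trace",
    "MessageTemplate": json.dumps({
        "Id": "000000000000000000000000",
        "HttpTracker": {"Method": "GET", "StatusCode": 200},
    }),
})

outer = json.loads(raw_event)                 # first parse: the event itself
inner = json.loads(outer["MessageTemplate"])  # second parse: the embedded string

print(inner["Id"], inner["HttpTracker"]["Method"])
```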
I'm seeking to make a Splunk timechart of values that match a certain filter:

source="/var/log/bcore/ws_metric*" event="WsMetricConnectEventType.connect_end" duration_seconds < 60*60
| timechart p95(duration_seconds) span=5m

Unfortunately, I'm clearly getting values that are longer than 60*60=3600 seconds. Many of the values for p95(duration_seconds) are actually somewhere in the range of 397k seconds. How can I actually filter the data going into timechart?
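One likely cause (an assumption, but consistent with the unfiltered results described): a term like duration_seconds < 60*60 in the base search is treated as search-string matching, not arithmetic, so the multiplication is never evaluated. Moving the comparison into a where clause before timechart - `| where duration_seconds < 3600` - evaluates it as an expression. The idea, sketched in Python:

```python
# Sketch: filter values *before* computing the percentile, mirroring what
# "| where duration_seconds < 3600" would do ahead of timechart.
durations = [12.0, 300.5, 3599.0, 397000.0, 45.2, 7200.0]

filtered = [d for d in durations if d < 60 * 60]  # arithmetic evaluated up front

def p95(values):
    """Simple nearest-rank 95th percentile (assumption: no interpolation)."""
    ordered = sorted(values)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

print(p95(filtered))  # computed only over the filtered values
```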
If a saved search in ES populates a data model, should I be giving the user permission to edit the search as well as permission to edit the data models?
We can connect successfully with the Salesforce TA and grab data from common objects, like Audit Trail. However, the EventLogFile input can't log anything and throws these errors:

2021-08-09 20:23:01,824 +0000 log_level=ERROR, pid=24066, tid=MainThread, file=engine_v2.py, func_name=start, code_line_no=57 | [stanza_name=QA_EVENTLOG] CloudConnectEngine encountered exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/lib/cloudconnectlib/core/engine_v2.py", line 52, in start
    for temp in result:
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/lib/cloudconnectlib/core/job.py", line 88, in run
    contexts = list(self._running_task.perform(self._context) or ())
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/lib/cloudconnectlib/core/task.py", line 289, in perform
    raise CCESplitError
cloudconnectlib.core.exceptions.CCESplitError

2021-08-09 20:23:01,822 +0000 log_level=ERROR, pid=24066, tid=MainThread, file=task.py, func_name=_send_request, code_line_no=505 | [stanza_name=QA_EVENTLOG] The response status=400 for request which url=MYCOMPANY--qa.my.salesforce.com/services/data/v51.0/query?q=SELECT%20Id%2CEventType%2CLogDate%20FROM%20EventLogFile%20WHERE%20LogDate%3E%3D2020-01-01T00%3A00%3A00.000z%20AND%20Interval%3D%27Daily%27%20ORDER%20BY%20LogDate%20LIMIT%201000 and method=GET and message=[{"message":"\nLogDate>=2020-01-01T00:00:00.000z AND Interval='Daily' ORDER BY LogDate\n ^\nERROR at Row:1:Column:91\nNo such column 'Interval' on entity 'EventLogFile'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.","errorCode":"INVALID_FIELD"}]

We just checked, and all permissions are correct (View Event Log Files, View All Data, and API Enabled).
Hello all, I was wondering if I could please get some suggestions on why Tomcat isn't honoring my pattern values. I am following the instructions here: https://docs.splunk.com/Documentation/AddOns/released/Tomcat/Recommendedfields

As recommended by the Splunk documentation, we set up the following on the className="org.apache.catalina.valves.AccessLogValve" element in server.xml:

prefix="localhost_access_log_splunk" suffix=".txt" pattern="%t, x_forwarded_for=?%{X-Forwarded-For}i?, remote_ip=?%a?,...

The filename and fields log as expected. The only issue is that instead of quotation (") marks, I am just seeing question marks (i.e. ...x_forwarded_for=?-?, remote_ip=?1.2.3.1?, remote_host=?1.2.3.2?,..).

Splunk Add-on for Tomcat: https://splunkbase.splunk.com/app/2911/
I have network logs showing communication between various network devices in one index in Splunk. I have another index with information about the devices that I need to report on. But I'm having issues because the network logs summarize the network activity and show all the devices with the same activity, as seen below (screenshot omitted).

How can I get the individual information about the devices, and/or how can I enumerate the information above? If I send it to a table, the device_ids will be blank, even if there is only one device in the list.
Hi there, I have a CSV lookup file consisting of sender email addresses. I'd like to search the Splunk logs for all the entries with these SenderAddresses over the last 90 days to determine what FromIP they have. What search syntax do I use?

The file has been uploaded to Splunk and is called AllSenders.csv. It has the headings email and flag; all the flags are set to 1, since I want to search them all.

In general, to search the logs for email I use:

index=app_messagetrace sourcetype=ms:o365:reporting:messagetrace

Thanks in advance... let me know what other info you need to help.
Our network uses a PKI (client and server certificate) authentication system. The Splunk administrators are not allowed to open the management port (8089) to allow API queries, so I have been trying to use the web port to mimic the browser interaction to create a search job. I use the Developer Tools in the browser to watch the API calls to get the session ID and cookies/tokens, and I pass them to the correct endpoints just like the browser does. (I'm using the requests Python library.)

No matter what I do, each time I GET or POST I am redirected to a 'login?session_expired=1' endpoint, which then redirects me to the original endpoint I intended to reach. This works fine for a GET, since I reach the resource I was trying to get to. With a POST (creating the search job), the redirection changes the POST to a GET - so I can only retrieve the status of existing jobs instead of creating a new job.

If it adds additional context, the paths are built like this (scoped to namespace):

When I'm trying to GET a job slot:
httpx://the.splunk.domain/en-US/splunkd/__raw/servicesNS/myusername/search/saved/searches/_new

When I'm trying to POST a new job:
httpx://the.splunk.domain/en-US/splunkd/__raw/servicesNS/myusername/search/search/jobs

I pass it all of the headers I see in the browser request, including the X-Splunk-Form-Key and Cookie fields. If I don't include those fields the connection is rejected, so I know it is checking them for validity. When I include the header {'X-Requested-With': 'XMLHttpRequest'} I get a 401 Denied error every time (but the browser is passing it).

I tried repeatedly to use the code block feature, but it failed every time. The code block is listed in my own reply below.

The request follows the redirection and ends up making a GET to the jobs endpoint, which returns existing jobs instead of POSTing the new job. I don't know what else it wants in order to accept the POST data without first redirecting me through the login endpoint.
I need to understand what a Splunk instance in our environment needs to accept an authenticated connection without redirecting me to a login endpoint first.  If anyone has experience with this server configuration and can help, I would really appreciate it.  Thank you!
I've been having a hard time getting a Splunk search that gives me a count of all records in my Lead object in Salesforce where OwnerId = the Id of the queue I'm using to manage intake and the created date = today. Every time I search our index, I get way more records than I should (the last check was 513 in Splunk versus 413 in SFDC Production). My query is below.

To explain the current state of the query and what I've been trying: I created a CleanNow that's just today's date in Year-Month-Day and a CleanCreatedDate converting the Salesforce CreatedDate to the same Year-Month-Day, and my last attempt to limit the search scope was a subsearch to find Ids for records where the owner is not my queue and drop them. The added date columns on the table are just to "idiot check" and try to find why I'm getting a delta.

index=sfdc sourcetype=sfdc:lead OwnerId="[Id of my queue]"
| eval CleanNow=strftime(now(), "%Y-%m-%d")
| eval CleanCreatedDate=strftime(strptime(CreatedDate,"%Y-%m-%d"),"%Y-%m-%d")
| where CleanCreatedDate=CleanNow
| search NOT [search sourcetype=sfdc:lead OwnerId!="[Id of my queue]" | fields Id]
| table Id, Status, OwnerId, CleanNow, CleanCreatedDate, CreatedDate, LastModifiedDate

Looking at what the Splunk query gives me and searching Prod with those Ids, I can see Splunk is returning leads as New / owned by my queue that in Prod are actually Converted / assigned to a human. And not even ones that were just now converted - I mean ones that were converted hours ago. I took one of those Ids that Splunk returned as New / owned by the queue, but which Prod said was converted hours ago, and did a search:

index=sfdc sourcetype=sfdc:lead Id="[Id of the delta record]"

And I get three events:

1. Status is New and it's owned by my queue (this is what I actually want to see in the return)
2. Status is New and it's owned by a human being (do not want to see)
3. Status is Converted and it's owned by a human being (do not want to see)

That Id only appears once in the results from Splunk, and the fact that there are two other events that don't match what I consider a "hit" means I should not have seen it at all, including the one time it logged as New / owned by the queue. Not entirely sure I'm explaining it right, but basically I need a way to recursively search the results for any Ids that match the original query but appear a second (or third, or fourth...) time with a different owner, and drop them from the return.
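One way to frame the deduplication is to keep only each Id's most recent event and apply the owner/status filter to that latest state (in SPL this is roughly what `dedup Id` does on time-sorted results, or `stats latest(*) as * by Id`; an assumption to verify against your data). A sketch with hypothetical records:

```python
# Hypothetical lead events: multiple rows per Id, newest state should win.
events = [
    {"Id": "L1", "ts": 1, "Status": "New", "OwnerId": "QUEUE"},
    {"Id": "L1", "ts": 2, "Status": "New", "OwnerId": "HUMAN-1"},
    {"Id": "L1", "ts": 3, "Status": "Converted", "OwnerId": "HUMAN-1"},
    {"Id": "L2", "ts": 1, "Status": "New", "OwnerId": "QUEUE"},
]

# Keep only the latest event per Id ...
latest = {}
for e in sorted(events, key=lambda e: e["ts"]):
    latest[e["Id"]] = e

# ... then filter on the *current* owner and status, so leads that were
# later converted or reassigned drop out entirely.
open_in_queue = [e["Id"] for e in latest.values()
                 if e["OwnerId"] == "QUEUE" and e["Status"] == "New"]
print(open_in_queue)
```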
Hello, I have a query that gives me the data below:

_time                | id      | order_id | job       | user_id
---------------------+---------+----------+-----------+--------
2021-06-08 17:00:00  | 2240905 | -        | done      | 23
2021-06-08 17:00:00  | 2240844 | -        | done      | 23
2021-06-08 12:00:00  | 2240905 | -        | start     | 167
2021-06-15 10:00:00  | 2240844 | -        | start     | 102
2021-06-15 10:00:00  | 2240905 | 1066899  | allocated | 23
2021-06-15 09:00:00  | 2240844 | 1055788  | allocated | 23

For each id, I need to find the "start" job to get its user_id and _time, but I also need the order_id. How can I do this? I need something like this:

_time                | id      | order_id | job   | user_id
---------------------+---------+----------+-------+--------
2021-06-08 12:00:00  | 2240905 | 1066899  | start | 167
2021-06-15 10:00:00  | 2240844 | 1055788  | start | 102

Thanks
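The desired result is a per-id join of the "start" row with the order_id from the "allocated" row. In SPL a stats-based roll-up by id is one common shape; the join logic itself, sketched in Python with the rows from the table above:

```python
# Rows from the table above (order_id is None where the query showed "-").
rows = [
    {"_time": "2021-06-08 17:00:00", "id": 2240905, "order_id": None, "job": "done", "user_id": 23},
    {"_time": "2021-06-08 17:00:00", "id": 2240844, "order_id": None, "job": "done", "user_id": 23},
    {"_time": "2021-06-08 12:00:00", "id": 2240905, "order_id": None, "job": "start", "user_id": 167},
    {"_time": "2021-06-15 10:00:00", "id": 2240844, "order_id": None, "job": "start", "user_id": 102},
    {"_time": "2021-06-15 10:00:00", "id": 2240905, "order_id": 1066899, "job": "allocated", "user_id": 23},
    {"_time": "2021-06-15 09:00:00", "id": 2240844, "order_id": 1055788, "job": "allocated", "user_id": 23},
]

# Map each id to the order_id carried by its "allocated" row ...
order_by_id = {r["id"]: r["order_id"] for r in rows if r["job"] == "allocated"}

# ... then take the "start" rows and attach that order_id.
result = [
    {**r, "order_id": order_by_id.get(r["id"])}
    for r in rows if r["job"] == "start"
]
for r in result:
    print(r["_time"], r["id"], r["order_id"], r["user_id"])
```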
Hi all, I have a lookup and I'd like to filter it based on a tokenized value. The lookup dropdown also sets a different token based on the selection. This would normally be a simple task, but I've been asked to have the lookup pre-filtered based on who is using the app. Each item in the dropdown represents a different user.

The lookup:

| inputlookup $tokLookup$
| fields field_description, field
| dedup field, field_description

field for label = field_description
field for value = field

The pseudocode of what I'd like to do is simple:

| inputlookup $tokLookup$
| where field="$tokUserRole$"
| fields field_description, field
| dedup field, field_description

Is this possible within the constraints, such that I'm only producing the single value from the lookup corresponding to the user?
Hi, I have events in the format below:

08/09/2021 09:27:00 +0000, search_name=sre_slo_BE_module_priority_monthly, search_now=1628501220.000, info_max_time=1628501220.000, info_search_time=1628501221.635, Module=InvoiceManagement, Priority=P4, LastViolationMonth="Jul-2021", MissedCount=1, LastViolationp90ResponseTime (s)="30.44", Deviation (%)="1.5"

I would like to plot a graph with the values P1, P2, P3, P4 on the Y axis and the month on the X axis. I tried using the query below, assigning numeric values to the priorities, but then the Y axis is based on the values assigned to each Priority rather than the Priority itself.

index=summary source=sre_slo_BE_module_priority_monthly Module="ControlCenter"
| eval convert_epoch = strftime(_time,"%m-%d-%Y")
| eval prevMonth=lower(strftime(relative_time(_time,"-1mon@d"),"%B"))
| eval MonthYear = prevMonth + "-" + date_year
| search convert_epoch!="08-01-2021"
| eval PriorityValue = case(Priority="P1", 4, Priority="P2", 3, Priority="P3", 2, Priority="P4", 1)
| stats values(PriorityValue) as PriorityValue by MonthYear, Priority

Below is the graph I get (screenshot omitted). Instead of PriorityValue on the Y axis, I need Priority itself. Could someone please help me out here? @kamlesh_vaghela @diogofgm - is this something you can assist with? I'd appreciate any help. Thanks.
Hi all, not sure if Deployment Architecture is the right place for this question. I need some clarification regarding search restrictions.

Context: The powers that be are looking into roles to enforce data segregation on a single application serving multiple clients. This is not my area of expertise.

Question: I'm having trouble figuring out the syntax of 'Search Restrictions' in the roles section of Splunk. For each role, we limit the indexes available to that client through Settings -> Users & Authentication -> Roles -> Indexes. For now, "included" is checked for each index used. There are six capabilities inherited from a base class, which I can list if they are relevant to the question. Through testing, I've narrowed the problem down to the restrictions: confining the search to the indexes works, and we get all the data we need (though it's not separated by client).

Under 'Search Restrictions', I've tried several combinations of syntax to evaluate a field present in the indexes that dictates which client's data we're looking at. Call it CLIENT; it's a two-character alphanumeric value.

The format in live data:

clientArray.CLIENT=6A

In summary data:

CLIENT=6A

Looking through the docs, it should work correctly from what I can tell. I add in my field for both forms like so, using the search filter SPL generator:

(CLIENT::6A) OR (clientArray.CLIENT::6A)

The preview gives me:

index=index1 OR index=index2 OR index=index3 OR index=index4 OR index=index5 | search (CLIENT::6A) OR (clientArray.CLIENT::6A)

This does not allow any data through. If I use CLIENT=6A in a basic search, I get back the data I need. Of course '=' is not allowed in search restrictions. Any ideas on what I'm doing wrong here?