All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I'm using this guide for Splunk MINT, and I'm also using the SAP SDK: https://docs.splunk.com/Documentation/MintIOSSDK/5.2.x/DevGuide/Requirementsandinstallation

I have done the following:
- Downloaded the SDK and added the framework plus the other required frameworks
- Made the settings in the app, but have not configured the server side for symbolication
- Initialized the MINT SDK with the API_KEY from my account

But my app crashes with the following error in the console:

Main Thread Checker: UI API called on a background thread: -[UIApplication keyWindow]
PID: 69771, TID: 5607271, Thread name: (none), Queue name: com.splunk.MINT.SequentialOperations, QoS: 0
Backtrace:
4   My Time QA               0x000000010778d259 -[ScreenMonitorManager init] + 131
5   My Time QA               0x000000010778d1b0 __38+[ScreenMonitorManager sharedInstance]_block_invoke + 38
6   libdispatch.dylib        0x0000000111fafdb5 _dispatch_client_callout + 8
7   libdispatch.dylib        0x0000000111fb183c _dispatch_once_callout + 66
8   My Time QA               0x000000010778d188 +[ScreenMonitorManager sharedInstance] + 102
9   My Time QA               0x000000010776dc99 -[DataFixture appendBaseValues] + 1316
10  My Time QA               0x000000010776d76b -[DataFixture init] + 64
11  My Time QA               0x0000000107796d71 -[EventDataFixture init] + 46
12  My Time QA               0x000000010776e9ce +[PingDataFixture getInstanceWithDeviceStatus:] + 41
13  My Time QA               0x000000010779f5a5 -[MintRequestJsonSerializer serializeEventToJsonForPingWithDeviceStatus:] + 70
14  My Time QA               0x0000000107775783 __46-[MintRequestWorker sendPing:completionBlock:]_block_invoke + 520
15  libdispatch.dylib        0x0000000111faed7f _dispatch_call_block_and_release + 12
16  libdispatch.dylib        0x0000000111fafdb5 _dispatch_client_callout + 8
17  libdispatch.dylib        0x0000000111fb7225 _dispatch_lane_serial_drain + 778
18  libdispatch.dylib        0x0000000111fb7e9c _dispatch_lane_invoke + 425
19  libdispatch.dylib        0x0000000111fc1ea3 _dispatch_workloop_worker_thread + 733
20  libsystem_pthread.dylib  0x0000000112466a3d _pthread_wqthread + 290
21  libsystem_pthread.dylib  0x0000000112465b77 start_wqthread + 15

Best Regards
Klaus
Hello Everyone, Is there a way to use the new fields extracted from the logs that Splunk ingests in the log event alert action, to make the logged alert event more dynamic?

Sample Splunk intake log:

{"event_type":"FAILED_LOGIN","event_id":"f0836a4a-9e4a-4914-b52c-010ecb0916f8","type":"event","created_at":"2020-11-13T21:30:09+05:30","created_by":{"login":"","type":"user","id":"2","name":"Unknown user"},"source":{"login":"rahulmishra1329@gmail.com","type":"user","id":"14044224420","name":"Rahul Mishra"},"session_id":null,"additional_details":null,"action_by":null,"ip_address":"117.211.192.31"}

New field extracted: event_type
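A sketch of one possible direction (untested; the index name below is an assumption, and the tokens only resolve against the first result row when the alert fires): Splunk alert actions can reference result fields with $result.<fieldname>$ tokens, so the logged event text can be built from the extracted fields.

```spl
Search used by the alert (index name is an assumption):
  index=box_events event_type=FAILED_LOGIN
  | table event_type, ip_address

Event text configured in the "Log Event" alert action, using tokens:
  Failed login detected: type=$result.event_type$ ip=$result.ip_address$
```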
Hi, I'm trying to configure a time-based lookup (temporal lookup) but it doesn't seem to be working as expected.

1) The lookup definition's fields are: time, context, tag::timebased

time,context,tag::timebased
2020-11-18,eft,high
2020-11-11,eft,high
2020-11-04,eft,high
2020-10-28,eft,high
2020-10-21,eft,high

2) The transforms.conf is on the SH:

[timebasedlookup]
time_field = time
time_format = %Y-%m-%d
min_matches = 1
max_matches = 10
default_match = default
min_offset_secs = 0
max_offset_secs = 86400
collection = timebasedlookup
external_type = kvstore
fields_list = _key, time, context, tag::timebased

3) When I run a search against the index, the results are OK ("high" in tag::timebased).
4) But when I run a search against the data model (tstats), the results are NOK ("default" in tag::timebased). The same _time in the index query and the tstats query returns different results.
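One avenue worth checking (a hedged sketch; the data model name and by-clause below are assumptions): tstats against an accelerated data model returns fields as they were stored at acceleration time, so a search-time automatic lookup may not be applied. Invoking the lookup explicitly after tstats, so that the time-bounded match runs against each row's _time, would look roughly like:

```spl
| tstats count from datamodel=MyModel by _time span=1d, MyModel.context
| rename MyModel.context AS context
| lookup timebasedlookup context OUTPUT tag::timebased
```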
I'm trying to convert the values in a column into field names, but I'm not able to achieve it. The table is like:

ENV       LABEL         APP
PR1       labelp1       APP1
PR1       labelp11      APP2
PR2       labelp2       APP1
PR2       labelp22      APP2

I'm trying to achieve:

PR1            PR2           APP
labelp1        labelp2       APP1
labelp11       labelp22      APP2

Can anyone help with this?
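For what it's worth, this kind of pivot can often be sketched with xyseries, which takes a row field, a column-name field, and a value field (assuming one LABEL per ENV/APP pair; the base search is a placeholder):

```spl
<your base search producing ENV, LABEL, APP>
| xyseries APP ENV LABEL
```

This yields one row per APP, with a column per distinct ENV value (PR1, PR2) holding the corresponding LABEL.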
Hello team, My search string is as below:

index=qrp STAGE IN ("*_RAW", T_FEED_MESSAGES)
| stats sum(TRADES) as "TradeCount" by ODS_SRC_SYSTEM_CODE

And the result screenshot is above. The AR1, BE1, etc. are source system codes, and the numerical values in the rows are the aggregate trade counts for each source system over the time span from 00:00:00 until 05:00:00. However, for source systems like BE2 and MA1 the count doesn't alter all through the day and is always 1. Now, when I set up a custom trigger notification alert on this search string for when the trade count of an individual source system is less than the threshold of 10 at 08:00:00, BE2 and MA1 always come up in the alert by default. Hence I want to exclude only these two source systems and take the rest into consideration while setting up my custom trigger notification. How do I achieve this? Kindly help me with your valuable inputs.
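A hedged sketch of one way this exclusion could look (untested; it simply filters out the two codes before aggregating and keeps only systems under the threshold):

```spl
index=qrp STAGE IN ("*_RAW", T_FEED_MESSAGES) NOT ODS_SRC_SYSTEM_CODE IN ("BE2", "MA1")
| stats sum(TRADES) as "TradeCount" by ODS_SRC_SYSTEM_CODE
| where TradeCount < 10
```

The alert's trigger condition would then be "number of results is greater than 0".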
Hi all, I have been trying to create a search which compares results from an index with results from an ldap search. The goal is to check if a user is not in one of the groups. For now I have this query:

index="summary_wineventlog" cn=Group1 OR cn=Group1 OR cn=Group3
| append [ | ldapsearch domain="default" search="(&(objectClass=user))" attrs="sAMAccountName" | rename sAMAccountName AS user | fields user]
| regex user!="^([a-zA-Z0-9_\.-]+)\$$"
| rex field=member_name "(?<username>\S+)+"
| eval result=if(match(username, user),"Contained","Not Contained")

The eval function only shows "Not Contained". My field member_name contains every user, delimited with a white space. The weird thing is that the field username only shows the first username of the field member_name. So that field looks like this:

user1 user2 user3 user4 user5 user6 user7 user8

I also have a lookup which contains a field with the usernames, but I can't add it; every time I tried, it gave me an error. This is the query that I tried for that:

index="summary_wineventlog" cn=Group1 OR cn=Group2 OR cn=Group3 [| inputlookup account_status_tracker | fields user]
| regex user!="^([a-zA-Z0-9_\.-]+)\$$"
| rex field=member_name "(?<username>\S+)+"
| eval result=if(match(member_name, user),"Contained","Not Contained")

Does someone know how I could check if a user is not in one of the 3 groups with one of the two searches above?
Thanks
Sasquatchatmars
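One hedged sketch of an alternative shape (untested; the group names and space delimiter are taken from the post, everything else is an assumption): tag each row with where it came from, expand member_name into one user per row, then keep users seen only via LDAP:

```spl
| ldapsearch domain="default" search="(&(objectClass=user))" attrs="sAMAccountName"
| rename sAMAccountName AS user
| eval src="ldap"
| append
    [ search index="summary_wineventlog" (cn=Group1 OR cn=Group2 OR cn=Group3)
      | makemv delim=" " member_name
      | mvexpand member_name
      | rename member_name AS user
      | eval src="group" ]
| stats values(src) AS src by user
| eval result=if(mvcount(src)=1 AND src="ldap", "Not Contained", "Contained")
```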
If I let my licence expire:
1. Will I be able to access the GUI and log in?
2. If I can log in, will I still be able to search historic data that has already been processed? If not, how can I search historic data?

Thanks
Hi everybody, According to the official documentation, the standard form for the HEC URI in self-service Splunk Cloud is as follows:

<protocol>://input-<host>:<port>/<endpoint>

However, it fails, and the form that does work is:

<protocol>://<host>:<port>/<endpoint>

I am on Splunk Cloud self-managed (trial). Has anyone had a similar experience? Is it possible that the Splunk team has made some changes without updating the documentation? Thank you!
Hi, Splunk currently authenticates to the mail server with a username and password. Is it possible to configure Splunk SMTP authentication via API key? I checked Settings --> Server Settings --> Email settings --> Mail Server Settings, and the only option I see is user/pass authentication.

Thanks in advance,
Kind regards
I have just acquired the Event Timeline Visualization application, loaded it on my Deployer, and pushed it out to my search heads. Unfortunately, not all of the functionality is showing. Do I need to load this individually on all the search heads? I'm at a loss on how to get this out to them via the normal push from the Deployer to the search head cluster.
Hi, I am trying to do analysis of stack traces in Splunk for our RDBMS. Essentially we can extract the spid for each stack trace with the query below; however, we also need to extract the full stack dump for that spid from around the same time frame. We have tried multiple avenues, but it looks like the limiter is the line breaking in Splunk, which limits the ability to look back and retrieve the full trace. The only way to identify a stack trace is the text "end of stack trace". The events can be correlated by the spid, which appears on each line between :0000:<SPID>:Date stamp. It is worth noting that the stack dumps have no consistent header; the only way to correlate is by the spid number.

Is there a way of doing a sub-query search so that, when an "end of stack trace" event happens, we look back over the same time frame for the spid and get back the full stack dump? An example trace is below.

Query to retrieve the spids:

index=<myindex> source="<loglocation>/errorlog_*" AND NOT source="<loglocation>/*RSM*" AND NOT source="<loglocation>/*DEV*" AND NOT source="<loglocation>/*_BACKUP" AND "end of stack trace"
| rex "0000:+(?<spid>[^:]+):"
| chart count by spid

Stack dump:

00:0025:00000:06378:2020/12/01 14:07:28.26 server Buffer Information: Buf addr = MASKEd Mass addr = MASKED Buf pageno = MASKED, Mass pageno = MASKED, dbid = MASKED
00:0025:00000:06378:2020/12/01 14:07:28.26 server Buf virtpg = MASKED, Mass virtpg = MASKED Buf stat = MASKED, Mass stat = MASKED Mass keep = 1, Mass awaited = 0
00:0025:00000:06378:2020/12/01 14:07:28.26 server Page Information from first read attempt: Page read from disk ppageno = MASKEDpptnid = MASKEDpindid = 2 pnextpg = 178748, pprevpg = 178746 plevel = 0, pstat = 0x82 pts_hi = 0, pts_lo = MASKED
00:0025:00000:06378:2020/12/01 14:07:28.26 server Page Information from second read attempt: Page read from disk ppageno = MASKED, SKED8747, pptnid = MASKED pindid = 2 pnextpg = 178748, pprevpg = 178746 plevel = 0, pstat = 0x82 pts_hi = 0, pts_lo = MASKED
00:0025:00000:06378:2020/12/01 14:07:28.26 server SDES Information: dbid = MASKED, objid = MASKED , sptnid = MASKED scur.pageid = MASKED sstat = MASKED, sstat2 = MASKED suid = MASKED, cacheid = MASKED
00:0025:00000:06378:2020/12/01 14:07:28.26 server SDES Information: scur = MASKED, 0) physical RID sstat3 = MASKED, sstat4 = MASKED, sstat5 = 0x0, sstat6 = 0x0, sstat7 = 0x0
00:0025:00000:06378:2020/12/01 14:07:28.26 server DES Information: name = MASKED type = 'U ' objuserstat = MASKED, objsysstat = MASKED, objsysstat2 = MASKED, objsysstat3 = 0x0, objsysstat4 = 0x0, objsysstat5 = 0x0
00:0025:00000:06378:2020/12/01 14:07:28.26 server DES properties: ( MASKED, MASKED)
00:0025:00000:06378:2020/12/01 14:07:28.26 server PSS Information: pstat = MASKED, pcurdb = MASKED, pspid = 6378 p2stat = MASKED, p3stat = MASKED, plasterror = 0, preverror = MASKED, pattention = 0
00:0025:00000:06378:2020/12/01 14:07:28.26 server PSS Information: p4stat = MASKED, p5stat = MASKED, p6stat = MASKED, p7stat = MASKED, p8stat = MASKED, pcurcmd = MASKED('FETCH CURSOR'), pcmderrs = 0x0
00:0025:00000:06378:2020/12/01 14:07:28.26 server End diagnostics for read failure:
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED pcstkwalk+0x482()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED ucstkgentrace+0x20f()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED ucbacktrace+0x54()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED wrongpage__print_diagnostic+0x7d0()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED wrongpage+0xb88()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED getpage_with_validation+0x2175()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED apl_getnext+0x8e0()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED getnext+0x228()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED LeScanOp::_LeOpNext(ExeCtxt&)+0x106()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED LeEmitSndOp::_LeOpNext(ExeCtxt&)+0x1be()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED LePlanNext+0xea()
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel [Handler pc: MASKED le_execerr installed by the following function:-]
00:0025:00000:06378:2020/12/01 14:07:28.26 kernel pc: MASKED exec_lava+0x688()
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel pc: MASKED curs_fetch+0xf2()
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel pc: MASKED s_execute+0x32f2()
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel [Handler pc: MASKED hdl_stack installed by the following function:-]
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel [Handler pc: MASKED s_handle installed by the following function:-]
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel pc: MASKED sequencer+0xcb1()
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel pc: MASKED tdsrecv_language+0x1df()
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel [Handler pc: MASKED ut_handle installed by the following function:-]
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel pc: MASKED conn_hdlr MASKED ()
00:0025:00000:06378:2020/12/01 14:07:28.27 kernel end of stack trace, spid 6378, kpid MASKED, suid MASKED
00:0025:00000:06378:2020/12/01 14:07:28.27 server Error: MASKED, Severity: MASKED, State: MASKED
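Since every line of a dump carries the spid, one possible direction (an untested sketch; the placeholders are kept from the post, the maxspan is an assumption, and the transaction command can be memory-hungry on large traces) is to group lines by spid and close each group at the end-of-trace marker:

```spl
index=<myindex> source="<loglocation>/errorlog_*" AND NOT source="<loglocation>/*RSM*" AND NOT source="<loglocation>/*DEV*" AND NOT source="<loglocation>/*_BACKUP"
| rex "0000:+(?<spid>[^:]+):"
| transaction spid endswith="end of stack trace" maxspan=5m
```

Each resulting transaction should then contain the full dump for one spid.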
I know how to create/edit the navigation menu and create collections; however, I would like to create a "Tools" collection that is displayed in every current app and any newly created app. Is there a way to create this once and have it displayed by default in every app, so that I don't have to go to each one and add this collection specifically?
Hello all, is it possible to call the Splunk REST API with the request in JSON? I am trying in the SOAP UI software, with media type = application/json.

When the request is input as a string:

search=search index=myindex |head 5

I get a valid response. But when I try the request in JSON format:

{"search": "search index=myindex |head 5"}

I get the response:

{"messages": [{ "type": "FATAL", "text": "Empty search." }]}

I also tried the following requests:

{"search": "index=myindex |head 5"}
{"search": search index=myindex |head 5}
{"body": {"search": "search index=myindex |head 5"}}

Thanks
Hi, how do I add test and production servers to a single dashboard in AppDynamics?
Hi guys, I'm having an issue while ingesting records from Salesforce: the connector retrieves only up to 2000 records. I've configured both "Order By" and "Query Start Date Parameters" = 10000, but it doesn't work; it doesn't handle pagination when the query returns more than 2000 items. I get the issue with any kind of data (e.g. Account) or events. Which configuration do I need to be sure it retrieves all records?

Thanks
Hi, I know that EUM data is retained for 14 days, but I want to know what happens with analytics/EUM data older than 14 days. Is it lost, or is there any option to get it back in the UI? This is specific to the SaaS Controller.
We want one alert if something happens more than once in an hour. But if it happens multiple times, we also want to see all those events in the email. And we only want one alert per hour.

Alert type: real time
Expires: 24 hours
Trigger alert when: number of results is greater than 0 in 1 hour
Trigger: Once
Throttle: yes
Suppress triggering for: 1 hour
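A hedged sketch of one way to get all matching events into a single hourly email (the index and search terms are placeholders): have the search itself return every matching event, trigger once when the number of results is greater than 1, and throttle for 60 minutes so at most one email goes out per hour. The email's inline results table then contains all the events.

```spl
index=<your_index> <your_error_condition>
| table _time, host, _raw
```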
Hello, I have configured MIDC (Method Invocation Data Collectors) on an application to collect the status of orders, but the value is in numbers. How can I map the numbers to strings to create a meaningful dashboard? For example:

1 ==> "Pending"
2 ==> "Approved"
3 ==> "Rejected"
Hi, I have a widget which lists hundreds of rows from an analytics search query I created. Is there a possibility to add a search box to quickly filter information from this list of rows?

Regards
Rohit
Hi Team, I need help extracting all the fields in the WinEventLog after the Message information in the log. All the data is delimited by "=".

Sample event:

10/26/2020 04:44:22 PM
LogName=xyz
SourceName=abc
EventCode=ddd
EventType=d
Type=Warning
ComputerName=xyz.abc.com
User=NOT_TRANSLATED
Sid=x-d-d-dd
SidType=d
TaskCategory=xxxx
OpCode=xxxx
RecordNumber=dddd
Keywords=xxxxxxx
Message=An infection has been found
Date/time of event = 2020-10-26 16:44:22
Event Severity Level = xxxx
Scan Rule = xx yy zz
URL = no_path
File name = yy.com
File status = xxxxxx
Component name = xxxxxx.com
Component disposition = abc
Virus name = abc xxx yyy
Virus ID = 00000
Virus definitions = 000000.000
Client IP = xxx.xx.xxx.xx
Scan Duration (sec) = x.xxx
Connect Duration (sec) = x.xxx
Symantec Protection Engine IP address = xx.xxx.xxx.xxx
Symantec Protection Engine Port number = xxxx
Uptime (in seconds) = xxxxxxx
Uber Category = xyz
Sub Category Name = abc
Sub Category ID = c
Sub Category Description = Programs that infect other programs, files, or areas of a computer by inserting themselves or attaching themselves to that medium.
Cumulative Risk Rating = xyz
Performance impact = xyz
Privacy impact = xyz
Ease of removal = xyz
Stealth = xyz
Date/time of event(with millisec) = 2020-10-26 16:44:22:617
Symantec Protection Engine Host Name = xxxxxx

If I use the props & transforms below, I can extract fields only up to "Message"; after that, the fields are not extracted. Kindly help me check and update my regex so that I can extract all the fields after the Message field too. That is, up to the Message field there is no space character around "=", but after the Message field a space character is allowed.
props.conf:

[yoursourcetype]
REPORT-ZZcustom_msg_kv = custom_msg_kv

transforms.conf:

[custom_msg_kv]
SOURCE_KEY = message
REGEX = ([a-zA-Z]\w+)=(.*?)(?=\s+[a-zA-Z]\w+=|$)
FORMAT = $1::$2

Also, please correct me if the props and transforms, and their format, are wrong. @FrankVl, kindly help with my query.
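For comparison only, an untested sketch (it assumes each key=value pair sits on its own line within the raw event, and that extracting from _raw rather than the message field is acceptable): anchoring the regex per line sidesteps the space-around-"=" problem, because a multi-word key like "Scan Rule" no longer has to be distinguished from the preceding value. Note that Splunk cleans extracted field names by default (CLEAN_KEYS), so "Scan Rule" would surface as Scan_Rule.

```conf
[custom_msg_kv]
SOURCE_KEY = _raw
REGEX = (?m)^([A-Za-z][^=\r\n]*?)\s*=\s*(.+?)\s*$
FORMAT = $1::$2
```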