All Posts


https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-gather-mobile-crash-stack-traces-using-the-API/ta-p/25622

I found this rather old post above, but it doesn't seem to be working. Essentially, what we would like to do is this:

- For the last x amount of time, programmatically gather mobile crash data (crash location, stack trace if available)

I've tried using the commands from the post above, but our users only have a client secret. The cookie doesn't seem to be valid.

$ curl -X GET -c /tmp/cookie --user <username>:<clientsecret> "https://<controller domain>.saas.appdynamics.com/controller/auth?action=login"
$ cat /tmp/cookie
# Netscape HTTP Cookie File
# https://curl.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
<controller domain>.saas.appdynamics.com  FALSE  /controller  TRUE  0  JSESSIONID  node0sjew4rhlgia01p578fkyqraqw146492667.node0
$ curl -X POST -b /tmp/cookie https://<controller domain>.saas.appdynamics.com/controller/restui/analyticsCrashDashboardUiService/getMobileCrashGroups
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Unauthorized</title>
</head>
<body>
HTTP Error 401 Unauthorized
<p/>
This request requires HTTP authentication
</body>
</html>
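Since the users only have a client secret, the session-cookie login above may simply not apply to them. A hedged sketch of an alternative worth trying, assuming the credentials belong to an API Client and using the controller's documented OAuth endpoint (the <apiclient> and <account> placeholders are illustrative):

$ curl -X POST -H "Content-Type: application/vnd.appd.cntrl+protobuf;v=1" "https://<controller domain>.saas.appdynamics.com/controller/api/oauth/access_token" -d 'grant_type=client_credentials&client_id=<apiclient>@<account>&client_secret=<clientsecret>'
# The JSON response should contain an access_token; send it as a Bearer token on subsequent calls:
$ curl -H "Authorization: Bearer <access_token>" "https://<controller domain>.saas.appdynamics.com/controller/rest/applications"

Note that Bearer tokens are intended for the documented REST API; the internal restui endpoint used above may still insist on a UI session.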
Is this a big app (e.g. bigger than 500MB)? If so, it takes time to upload using the web UI and you'll have to adjust the max upload size and timeout amount. If you use developer tools (F12) and go to Network, do you see any failed requests, or any responses that would indicate an error? Are you able to use the CLI of the server running Splunk Enterprise (if you are using on-prem) to un-tar the app into the /opt/splunk/etc/apps directory, as sketched below? This would bypass validation, but would still install the app.
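A minimal sketch of that CLI approach, assuming a *nix host, a default /opt/splunk install path, and a package named myapp.tgz (both placeholders):

# extract the app package directly into the apps directory
$ tar -xzf myapp.tgz -C /opt/splunk/etc/apps
# restart Splunk so it picks up the new app
$ /opt/splunk/bin/splunk restart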
It seems that you are using the opc_t token as a keyword search in the first line, and then as a field filter in the appended search. Does it work when you use opc=$opc_t|s$ as the filter in your first line?

<query>index="pm-azlm_internal_prod_events" sourcetype="azlm" opc=$opc_t|s$ $framenum$
| strcat opc "_" frame_num UNIQUE_ID
| dedup _time UNIQUE_ID
| append [ search index="pm-azlm_internal_dev_events" sourcetype="azlm-dev" ocp=$opc_t|s$
    | strcat ocp "-j_" fr UNIQUE_ID
    | dedup UNIQUE_ID]
| timechart span=12h aligntime=@d limit=0 count by UNIQUE_ID
| sort - _time
</query>

A good way to debug this is to click the magnifying glass in the lower-right part of the panel to launch the search with the current value of the opc_t token. It may result in a bad filter which removes all your search results, which can then be adjusted so it does not remove all results. (Note: put the token between double quotes if it can contain a space character.) (As bowesmana said, this is not necessary.)
Just use PowerConnect! It uses native SAP kernel API calls to extract DB- and OS-layer information, on top of 300 out-of-the-box extractors for SAP monitoring and security data.
The answer is very easy: PowerConnect. PowerConnect sends full-fidelity SAP data to Splunk, where you can correlate it easily via search.
I disagree with the solution suggested. Why not use something out of the box, like PowerConnect, to send the data to Splunk? You can do it directly from the systems reporting to ALM. PowerConnect is fully supported for ABAP, Java, and most SAP SaaS offerings.
I need to perform an analysis based on a lookup file named checkin_rooms.csv, which includes a column confroom_ipaddress with values such as:

10.40.89.76
17.76.42.44
17.200.126.20

For each IP address in this file, I want to check the Splunk logs in index=fow_checkin for the following conditions:

- There is a message containing "IpAddress(from request body)"
- There is no message associated with the same IP address that contains display button:panel-* in other events.

Example log entries:

message: Display Option Request Source: TouchPanel, IpAddress(from request body): null, Action: buttonDisplay, Timezone: null and IpAddress(from request header): 17.200.126.20
message: display button:panel-takeover for ipaddress: 17.200.126.20

Could someone please guide me on how to construct this query to identify which IP addresses from the lookup file meet these criteria? Thanks in advance
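A hedged sketch of one way to express this in SPL, assuming the IP address can be pulled out of the message text with a simple rex (the ip field name and the patterns are illustrative):

index=fow_checkin ("IpAddress(from request body)" OR "display button:panel-")
| rex field=message "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| eval has_request=if(like(message, "%IpAddress(from request body)%"), 1, 0)
| eval has_panel=if(like(message, "%display button:panel-%"), 1, 0)
| stats max(has_request) AS has_request, max(has_panel) AS has_panel by ip
| where has_request=1 AND has_panel=0
| search [| inputlookup checkin_rooms.csv | rename confroom_ipaddress AS ip | fields ip]

The stats collapses all events per IP, so has_panel=0 means no "display button:panel-" message was ever seen for that address; the final subsearch restricts the results to addresses in the lookup file.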
Hi @yuanliu  It worked perfectly! Thank you so much for your help; you’ve saved me a great deal of time. I had been struggling for several days to implement this logic to create an alert, and now that I have an efficient approach, I’m happy to accept this as the solution. Thanks again for your support!
We have no idea what your events look like or what your configuration is, so we can't know how and why the fields are (not) extracted. Most probably your sourcetype is misconfigured and doesn't extract the fields, or the extractions aren't configured at all and Splunk relies on its automatic extractions, which your events might not completely fit.
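For comparison, a minimal sketch of what a search-time extraction could look like in props.conf, assuming a hypothetical sourcetype my_sourcetype and key=value-style events (both placeholders):

[my_sourcetype]
EXTRACT-user_and_action = user=(?<user>\S+)\s+action=(?<action>\S+)

If nothing like this exists for your sourcetype, Splunk falls back on its automatic key=value extraction, which only works when the events actually follow that shape.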
I have a license server where I have two indexer pools, A and B, configured. Pool A consists of a cluster of 5 indexers with an average consumption of 500GB. Pool B consists of 1 indexer with a consumption of 100GB per day. In pool B, data from an F5 index is forwarded to the indexers in pool A. My license consumption has increased to over 800GB total. My questions: Is forwarding data from indexer B to indexer A causing me to consume more license? Would it help if I changed the configuration to a single pool?
Hello all, I have a query which creates a table similar to the following:

| table S42DSN_0001 S42DSN_0010

The table populates data within the S42DSN_0001 column, but not the S42DSN_0010 column. I've double-checked that there is definitely data captured within that field by looking at the events. There are 20 similarly named fields using the format S42DSN_00## which are found within the raw event data. Only the first 8 return results using the above query. For example, the following works fine:

| table S42DSN_0001 S42DSN_0002

Any thoughts on why this might be happening? I am wondering if fields past S42DSN_0008 are not considered interesting, so Splunk is leaving them out of the results? Oddly enough, if I change my time period to the past 30 days and use S42DSN_0010=* as a search criterion, I receive some, but not all, results within that column. Thanks in advance, Trevor
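A hedged diagnostic sketch (the index and sourcetype are placeholders): forcing the fields into the result set with an explicit fields command, then counting non-null values, can show whether the problem is the extraction itself or just field discovery in the UI.

index=<your_index> sourcetype=<your_sourcetype>
| fields S42DSN_*
| stats count(S42DSN_0001) AS c_0001, count(S42DSN_0008) AS c_0008, count(S42DSN_0010) AS c_0010

If the counts are non-zero but the table stays empty, the issue is in the displaying search rather than in the data.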
Does Splunk parse the time correctly on its own? Try comparing the extracted time of the event with the time in the raw text of the event.

- If they are the same and/or adjusted for timezone, then you are good to go.
- If they are consistently different by one or more whole hours, then it is likely a timezone issue that can be fixed using props.conf (see the sketch below).
- If they are variably different, then it could be a timestamp extraction issue.
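A minimal props.conf sketch for the timezone case, assuming a hypothetical sourcetype my_sourcetype whose raw timestamps are written in US Eastern time (both are placeholders):

[my_sourcetype]
TZ = America/New_York
# For the extraction case, pin down the timestamp's position and format as well:
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

This belongs on the machines that do the parsing (indexers or heavy forwarders), followed by a restart; already-indexed events keep their old timestamps.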
No one will tell you what to expect, since exam takers are under an NDA. One can only say that the track flowchart from https://www.splunk.com/en_us/training/certification-track/splunk-core-certified-user.html covers it pretty well.
Hello, I am going to be sitting for the Core Certified User exam in a week, and I just wanted to ask if there are any tips or advice somebody could give me. I have been prepping for a while, as well as taking some Udemy courses geared toward the exam. Anything helps!
There is no "one size fits all" response to such question. In different organizations those roles can perform different tasks and need to have different access levels to the Splunk infrastructure. Th... See more...
There is no "one size fits all" response to such question. In different organizations those roles can perform different tasks and need to have different access levels to the Splunk infrastructure. The capabilities will also differ depending on what products and apps you are using.
You can use the following segment to make the alert trigger even when its search returns zero events:

<yoursearch>
| appendpipe
    [ stats count
    | eval description="No problems found. All is well!"
    | where count = 0
    | fields - count ]

If there are results from the initial search, then this segment does not change the results. But if there are no results from the initial search, this segment will create a single row with a single field, "description", containing that string.
Your rex commands do not seem to contain any named capture groups, so how are your fields extracted?
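For reference, a minimal sketch of the difference (the status field name and pattern are illustrative):

| rex field=_raw "status=(?<status>\d+)"

creates a field named status, while

| rex field=_raw "status=\d+"

matches but extracts nothing. Only the (?<name>...) syntax tells rex which field to create.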
I think you'll have to elaborate on what you think people with those roles will do in Splunk. Will they be viewing "management overview" dashboards? Will they be using Splunk searches to find specific threats or issues? Will they be customizing Splunk by editing knowledge objects like field extractions and lookups?
Patching vulnerabilities is a somewhat different matter than support levels, but I would expect Splunk to provide vulnerability fixes during the support period, as it has so far (e.g. 9.1.5 was released Jul 1 this year). One correction: the support period counts from the 9.2.0 release date, not from 9.2.2. EDIT: Just so that we're clear: I'm in no way affiliated with Splunk Inc., and this is just my personal view and prediction. If you want an official Splunk position, ask your sales representative or support.
1 - Point taken; it was meant as a request, not a demand. I'm not sure how I could have framed it so it didn't come across that way. I will avoid tagging people.
2 - Did that, thanks for the feedback.
3/4/5 - Data is getting extracted properly except for systime_mcd, which is null for all the correlation IDs.