All Posts

I disagree with the solution suggested. Why not use something out of the box, like PowerConnect, to send the data to Splunk? You can do it directly from the systems reporting to ALM. PowerConnect is fully supported for ABAP, JAVA, and most SAP SaaS offerings.
I need to perform an analysis based on a lookup file named checkin_rooms.csv, which includes a column confroom_ipaddress with values such as:

10.40.89.76
17.76.42.44
17.200.126.20

For each IP address in this file, I want to check the Splunk logs in index=fow_checkin for the following conditions:

1. There is a message containing "IpAddress(from request body)".
2. There is no message associated with the same IP address that contains display button:panel-* in other events.

Example log entries:

message: Display Option Request Source: TouchPanel, IpAddress(from request body): null, Action: buttonDisplay, Timezone: null and IpAddress(from request header): 17.200.126.20
message: display button:panel-takeover for ipaddress: 17.200.126.20

Could someone please guide me on how to construct this query to identify which IP addresses from the lookup file meet these criteria? Thanks in advance
Hi @yuanliu  It worked perfectly! Thank you so much for your help; you’ve saved me a great deal of time. I had been struggling for several days to implement this logic to create an alert, and now that I have an efficient approach, I’m happy to accept this as the solution. Thanks again for your support!
We have no idea what your events look like or what your configuration is, so we can't know how and why the fields are (not) extracted. Most probably your sourcetype is misconfigured and doesn't extract the fields, or the extractions aren't configured at all and Splunk relies on its automatic extractions, which your events might not completely fit.
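For illustration only, a search-time field extraction is typically configured in props.conf on the search head; the sourcetype, field names, and pattern below are hypothetical:

```
# props.conf (search head) -- hypothetical sourcetype and fields
[my_custom_sourcetype]
EXTRACT-user_action = user=(?<user>\S+)\s+action=(?<action>\w+)
```

Whether something like this applies depends entirely on what the raw events actually look like.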
I have a license server where I have two indexer pools, A and B, configured. Pool A consists of a cluster of 5 indexers with an average consumption of 500GB. Pool B consists of 1 indexer with a consumption of 100GB per day. In pool B, data from an F5 index is forwarded to the indexer in pool A. My total license consumption has increased to over 800GB. My question is: is forwarding data from indexer B to indexer A causing me to consume more license? Would it help if I changed the configuration to a single pool?
Hello all, I have a query which creates a table similar to the following:

| table S42DSN_0001 S42DSN_0010

The table populates data within the S42DSN_0001 column, but not the S42DSN_0010 column. I've double-checked that there is definitely data captured within that field by looking at the events. There are 20 similarly named fields using the format S42DSN_00## which are found within the raw event data. Only the first 8 return results using the above query. For example, the following works fine:

| table S42DSN_0001 S42DSN_0002

Any thoughts on why this might be happening? I am wondering if fields past S42DSN_0008 are not considered interesting, so Splunk is leaving them out of the results? Oddly enough, if I change my time period to the past 30 days and use S42DSN_0010=* as a search criterion, I receive some, but not all, results within that column. Thanks in advance, Trevor
Does Splunk parse the time correctly on its own? Try comparing the extracted time of the event with the time in the raw text of the event.

- If they are the same, and/or adjusted for timezone, then you are good to go.
- If they are consistently different by one or more hours, then it is likely a timezone issue that can be fixed using props.conf.
- If they are variably different, then it could be a timestamp extraction issue.
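For reference, a timezone correction of that kind is usually made with the TZ setting in props.conf, applied where parsing happens (indexer or heavy forwarder); the sourcetype name and zone below are hypothetical:

```
# props.conf -- hypothetical sourcetype and timezone
[my_sourcetype]
TZ = America/New_York
```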
No one will tell you what to expect, since exam takers are under NDA. One can only say that the track flowchart from https://www.splunk.com/en_us/training/certification-track/splunk-core-certified-user.html says it pretty well.
Hello,  I am going to be sitting for the Core Certified User Exam in a week, and I just wanted to ask if there were any tips or advice somebody could give me. I have been prepping for a while as well as taking some udemy courses geared toward the exam. Anything helps!  
There is no "one size fits all" response to such a question. In different organizations, those roles can perform different tasks and need different access levels to the Splunk infrastructure. The capabilities will also differ depending on what products and apps you are using.
You can use the following segment to make the alert trigger even when its search returns zero events:

<yoursearch>
| appendpipe
    [ stats count
    | eval description="No problems found. All is well!"
    | where count = 0
    | fields - count ]

If there are results from the initial search, then this segment does not change the results. But if there are no results from the initial search, this segment will create a single row with a single field, description, containing the string.
Your rex commands do not seem to contain any named capture groups, so how are your fields extracted?
I think you'll have to elaborate on what you think people with those roles will do in Splunk. Will they be viewing "management overview" dashboards? Will they be using Splunk searches to find specific threats or issues? Will they be customizing Splunk by editing knowledge objects like field extractions and lookups?
Patching vulnerabilities is a somewhat different matter than support levels, but I would expect Splunk to provide vulnerability fixes during the support period, as it has so far (e.g. 9.1.5 was released Jul 1 this year). One correction: the support period counts from the 9.2.0 release date, not 9.2.2. EDIT: Just so that we're clear, I'm in no way affiliated with Splunk Inc., and this is just my personal view and prediction. If you want an official Splunk position, ask your sales representative or support.
1 - Point taken. It was not a demand but a request; not sure how I could have framed it to look like a request. I will avoid tagging people.
2 - Did that, thanks for the feedback.
3/4/5 - Data is getting extracted properly, except for systime_mcd, which is null for all the correlation-ids.
SOAR 6.2.2 includes Splunk Forwarder 9.0.9 EDIT: To be specific, Splunk SOAR 6.2.2.134 includes splunkforwarder version 9.0.9 build 6315942c563f
So will v9.2.2 of Splunk Universal Forwarder be updated to close future vulnerabilities between now and the end of Windows Server 2016 extended maintenance?  If so, how will Windows Server 2016 clients be notified of the alternate stream updates?
Wait a second. You're talking about a UF? And those props are where? On the UF or on the idx/HF? Do you use EVENT_BREAKER?
1. Please don't call out specific people. It's rude. If you demand someone's help, you typically pay for consulting services. Here, people help in their own spare time out of good will.
2. When you post samples and SPL excerpts, please format them properly, in a code block or preformatted paragraph (and use line breaks in SPL).
3. Did you verify that, before you do the stats, the fields you're aggregating are properly extracted?
4. stats values() can produce multivalued fields; trying to treat them as simple integers won't work.
5. As you're extracting fields from textual content, you might need to call tonumber() on them to get an integer which you can use to calculate a difference.
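A minimal sketch of points 3-5 above; the field name and rex pattern here are hypothetical, not taken from the original post:

```
<yoursearch>
| rex field=_raw "elapsed=(?<elapsed>\d+)"
| eval elapsed = tonumber(elapsed)
| stats max(elapsed) AS max_elapsed, min(elapsed) AS min_elapsed
| eval diff = max_elapsed - min_elapsed
```

The point is that the aggregated field must be a single numeric value before arithmetic like the final eval will work.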
It's hard to say without knowing the actual files. But generally, crcSalt is rarely used. Usually, when the files have relatively long common beginning parts, it's better to increase the size of the header used for the CRC calculation.
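For reference, the length of the header used for the CRC is controlled by initCrcLength in inputs.conf; the monitor path below is hypothetical:

```
# inputs.conf -- hypothetical monitor stanza
[monitor:///var/log/myapp/*.log]
# default is 256 bytes; raise it when files share a long common header
initCrcLength = 1024
```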