All Posts

Anyone who comes across this issue, please upvote the following idea for a configuration option to disable INDEXED_EXTRACTIONS via an app's local props.conf: https://ideas.splunk.com/ideas/EID-I-2400
Here's the approach I would use. It may not be the best way.

1. Search the last 48 hours for the desired events
2. Extract the Policy_Name field into Last_48_Hours_Policy_Names
3. Extract the "root" policy name ("policy_n_") from Last_48_Hours_Policy_Names
4. Append the search of today for the desired events
5. Extract the Policy_Name field into Today_Policy_Names
6. Extract the "root" policy name ("policy_n_") from Today_Policy_Names
7. Regroup the results on the root policy name field
8. Discard the root policy name field
9. Compare Last_48_Hours_Policy_Names to Today_Policy_Names. If different, set New_Policy_Names to Today_Policy_Names
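A rough, untested SPL sketch of those steps (my_index and <your filters> are placeholders, and the rex assumes root names look like policy_<n>_; adjust to your actual naming):

index=my_index <your filters> earliest=-48h
| rex field=Policy_Name "(?<root_policy>policy_\d+_)"
| stats values(Policy_Name) as Last_48_Hours_Policy_Names by root_policy
| append
    [ search index=my_index <your filters> earliest=@d
    | rex field=Policy_Name "(?<root_policy>policy_\d+_)"
    | stats values(Policy_Name) as Today_Policy_Names by root_policy ]
| stats values(Last_48_Hours_Policy_Names) as Last_48_Hours_Policy_Names values(Today_Policy_Names) as Today_Policy_Names by root_policy
| eval New_Policy_Names=if(mvjoin(coalesce(Last_48_Hours_Policy_Names, "none"), ",") != mvjoin(coalesce(Today_Policy_Names, "none"), ","), Today_Policy_Names, null())
| fields - root_policy

The mvjoin/coalesce step is just to compare the two multivalue lists as strings; values() keeps them sorted, so equal lists join to equal strings.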
There's probably more than one way to do that. If you want to use rex, then this should do it. It just takes everything after the first space as the manual_entry field.

| rex "\s(?<manual_entry>.*)"
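To sanity-check it against one of your sample lines, a quick makeresults test:

| makeresults
| eval _raw="#1724872463 cat .bashrc"
| rex "\s(?<manual_entry>.*)"
| table _raw, manual_entry

manual_entry comes out as "cat .bashrc".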
I have never been one to understand regex; however, I need to extract everything after the first entry (#172...) into its own field. Let's call it manual_entry. I'm getting tired of searching and randomly trying things.

#1724872356 exit
#1724872357 exit
#1724872463 cat .bashrc
#1724872485 sudo cat /etc/profile.d/join-timestamp-history.sh
#1724872512 exit
#1724877740 firefox

manual_entry
exit
exit
cat .bashrc
sudo cat /etc/profile.d/join-timestamp-history.sh
exit
firefox
Hello members, I'm struggling with something. I have configured data inputs and the indexer name on the HF, made the app point to Search Head & Reporting, and also forwarded logs from the other system as syslog data to the Heavy Forwarder. I have also configured the same index on the HF and at the cluster master and pushed that to all indexers, but when I look for that index on the SH (Search Head) there are no results. Can someone help me please? Thanks
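If it helps narrow things down, a first check (your_index_name is a placeholder for the index you configured) is to confirm whether the data reached the indexers at all; run it over All time to also catch timestamp problems:

| tstats count where index=your_index_name by host, sourcetype

If that returns nothing, the gap is between the HF and the indexers (for example outputs.conf on the HF) rather than at search time.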
Hi @yuanliu. Thanks for the suggestion. The option keepempty=true is something new I learned; I wish stats values() also had that option. However, when I tried keepempty=true, it added a lot more delay (3x) compared to using only dedup, perhaps because I have so many fields. I've been using fillnull to keep empty fields. The reason is that although one field is empty, I still want to keep the other fields. Your way of using foreach to re-assign the field to null() is awesome. Thanks for showing me this trick. Is there any benefit to moving "UNPSEC" back to null()? I usually just give it "N/A" for strings and 0 for numerics. I appreciate your help. Thanks
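For anyone following along, a minimal sketch of that foreach trick, assuming "UNPSEC" is the placeholder string from earlier in this thread:

| foreach * [ eval <<FIELD>> = if('<<FIELD>>' == "UNPSEC", null(), '<<FIELD>>') ]

Moving the placeholder back to null() mainly matters if later commands (stats, dedup, keepempty logic) should treat the field as absent rather than as a real value.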
Hi @Gopikrishnan.Ravindran, Sorry for the late reply here. I've passed your email on to the CX team. Someone will be in touch with you shortly as they have some follow-up questions for you.
@PickleRick Events are indexed, but fields are not extracted for the same day. For other days, there is no problem.
Thanks @PickleRick for answering. This is what I found works.

index=os_* (`wineventlog_security` OR sourcetype=linux_secure)
    [| tstats count WHERE index=os_* (source=* OR sourcetype=*) host IN ( $servers_entered$ ) by host
    | dedup host
    | eval host=host+"*"
    | table host]
| dedup host
| eval sourcetype=if((sourcetype == "linux_secure"), sourcetype, source)
| fillnull value=""
| table host, index, sourcetype, _raw
Upper/lowercase doesn't matter with a search term. Splunk matches case-insensitively (with the search command; the where command is case-sensitive). And looking for something is definitely not the same as looking for something*.
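A quick way to see the difference for yourself:

| makeresults | eval host="HOST1" | search host=host1

returns the row, because the search command matches field values case-insensitively, while

| makeresults | eval host="HOST1" | where host="host1"

returns nothing, because where does an exact, case-sensitive comparison.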
@bowesmana, @gcusello, and @yuanliu thanks for the responses.  This has been shelved due to funding issues.  If it gets funded, we will go back to the vendor and see if they can add something that will say this is new or timestamp it so we can keep track that way.
Hello, Splunk DB Connect is indexing only 10k events per hour at a time, no matter what settings I configure in inputs. The DB Connect version is 3.1.0. The db_inputs.conf is:

[ABC]
connection = ABC_PROD
disabled = 0
host = 1.1.1.1
index = test
index_time_mode = dbColumn
interval = 900
mode = rising
query = SELECT *\
FROM "mytable"\
WHERE "ID" > ?\
ORDER BY "ID" ASC
source = XYZ
sourcetype = XYZ:lis
input_timestamp_column_number = 28
query_timeout = 60
tail_rising_column_number = 1
max_rows = 10000000
fetch_size = 100000

When I run the query using dbxquery in Splunk, I do get more than 10k events. I also tried max_rows = 0, which basically should ingest everything, but it's not working. How can I ingest unlimited rows?
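As a hedged sanity check (the connection, table, index, and sourcetype names below are taken from the config above; identifier quoting may need adjusting for your database), compare what the database reports with what Splunk has actually indexed:

| dbxquery connection="ABC_PROD" query="SELECT COUNT(*) FROM mytable"

| tstats count where index=test sourcetype="XYZ:lis"

If the first number keeps growing much faster than the second, the input itself (interval, rising checkpoint, or max_rows) is the bottleneck rather than the query.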
I'm working on a dashboard in which the user enters a list of hosts. The issue I'm running into is they must add an asterisk to the host name or it isn't found in the search. This is what the SPL looks like.

index=os_* (`wineventlog_security` OR sourcetype=linux_secure) host IN ( host1*, host2*, host3*, host4*, host5*, host6*, host7*, host8* ) earliest=-7d@d
| dedup host
| eval sourcetype=if(sourcetype = "linux_secure", sourcetype, source)
| fillnull value=""
| table host, index, sourcetype, _raw

If there is no * then there are no results. What I would like to be able to do is have them enter the hostname or FQDN, in either upper or lower case, and the SPL would change it to lower case, remove any FQDN parts, add the *, and then search. So far I haven't come up with SPL that works. Any thoughts? TIA, Joe
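For reference, an untested sketch of that idea (assuming the entered list lands in a comma-separated $servers_entered$ token, as in the reply further up): a makeresults subsearch can lowercase the list, strip everything after the first dot, append the asterisk, and hand the result back as a host list:

index=os_* (`wineventlog_security` OR sourcetype=linux_secure) earliest=-7d@d
    [| makeresults
    | eval host=split(lower(replace("$servers_entered$", "\s+", "")), ",")
    | mvexpand host
    | eval host=mvindex(split(host, "."), 0)."*"
    | table host]
| dedup host
| eval sourcetype=if(sourcetype = "linux_secure", sourcetype, source)
| fillnull value=""
| table host, index, sourcetype, _raw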
Hi, please share the configuration documentation on the Panorama side for integrating this app with Splunk SOAR.
That's what I thought, too. So I was surprised when I looked at the upgrade path table and did not see it mention upgrading to 6.2.1 first, and then going to 6.2.2 if you started at 6.2.0. 
But are the events indexed with the fields not extracted, or are the events not ingested at all?
You skipped the first point, which says that early versions must be upgraded one by one until you reach 4.10.something. From there on you don't have to go version by version; you can skip straight to 6.2.1. But you can't upgrade to 6.2.2 from any version other than 6.2.1. That's how I read it.
That's pretty much exactly what I was looking for. Thank you.
Hello, for this question I am referencing the documentation page: https://docs.splunk.com/Documentation/SOARonprem/6.2.2/Install/UpgradePathForUnprivilegedInstalls

There are two sets of conflicting information, and I do not know how to proceed with my ON-PREMISES, UNPRIVILEGED, PRIMARY + WARM STANDBY configuration (the database is on the instances, not external).

At the top of the documentation, it states:

"Unprivileged Splunk SOAR (On-premises) running a release earlier than release 6.2.1 can be upgraded to Splunk SOAR (On-premises) release 6.2.1, and then to release 6.2.2."

It says CAN BE. So... is it optional?

"All deployments must upgrade to Splunk SOAR (On-premises) 6.2.1 before upgrading to higher releases in order to upgrade the PostgreSQL database."

It says MUST UPGRADE. So... is it mandatory?

But then, toward the BOTTOM of the table, I'm looking at the row for a starting version of "6.2.0". Steps 1 & 2 are conditionals for clustered deployments and external PostgreSQL databases. Step 3 goes directly to upgrading to 6.2.2.

So... do I, or do I NOT, upgrade to 6.2.1 first?