All Posts


Hello members, I'm struggling with something. I have configured data inputs and the index name on the HF, pointed the app to Search & Reporting, and also forwarded logs from the other system as syslog data to the heavy forwarder. I have also configured the same index on the cluster master and pushed it to all indexers, but when I look for that index on the SH (Search Head) there are no results. Can someone help me please? Thanks
Hi @yuanliu  Thanks for the suggestion. The option keepempty=true is something new I learned; I wish stats values() also had that option. However, when I tried keepempty=true, it added much more delay (3x) compared to using only dedup, perhaps because I have so many fields. I've been using fillnull to keep empty fields; the reason is that although one field is empty, I still want to keep the others. Your way of using foreach to re-assign the field to null() is awesome. Thanks for showing me this trick.  Is there any benefit to moving "UNPSEC" back to null()? I usually just give it "N/A" for strings and 0 for numerics. I appreciate your help.  Thanks
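As a rough analogy (hypothetical field names, Python rather than SPL), the foreach + null() pattern discussed above amounts to mapping a placeholder value back to a true null in every field, so genuinely empty fields stay distinguishable from real values:

```python
def restore_nulls(record, placeholder="UNSPEC"):
    """Map a placeholder value back to None in every field of a record.

    Roughly mirrors the SPL idea of iterating over all fields with
    foreach and eval-ing each one back to null() when it matches the
    placeholder. Field names and the placeholder here are illustrative.
    """
    return {field: (None if value == placeholder else value)
            for field, value in record.items()}

event = {"user": "alice", "dest": "UNSPEC", "bytes": 1024}
print(restore_nulls(event))  # {'user': 'alice', 'dest': None, 'bytes': 1024}
```

One benefit of restoring null() over a sentinel like "N/A" is that aggregations and null-aware functions then treat the field as genuinely absent instead of as just another string value.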
Hi @Gopikrishnan.Ravindran, Sorry for the late reply here. I've passed your email on to the CX team. Someone will be in touch with you shortly, as they have some follow-up questions for you.
@PickleRick Events are indexed, but fields are not extracted for the same day. For other days, there is no problem.
Thanks @PickleRick for answering. This is what I found works.

index=os_* (`wineventlog_security` OR sourcetype=linux_secure)
    [| tstats count WHERE index=os_* (source=* OR sourcetype=*) host IN ( $servers_entered$ ) by host
     | dedup host
     | eval host=host+"*"
     | table host]
| dedup host
| eval sourcetype=if((sourcetype == "linux_secure"),sourcetype,source)
| fillnull value=""
| table host, index, sourcetype, _raw
Upper/lowercase doesn't matter with search terms. Splunk matches case-insensitively (with the search command; the where command is case-sensitive). And looking for something is definitely not the same as looking for something*.
@bowesmana, @gcusello, and @yuanliu thanks for the responses. This has been shelved due to funding issues. If it gets funded, we will go back to the vendor and see if they can add something that will say this is new, or timestamp it, so we can keep track that way.
Hello, Splunk DB Connect is indexing only 10k events per hour no matter what settings I configure in the input. The DB Connect version is 3.1.0, and db_inputs.conf is:

[ABC]
connection = ABC_PROD
disabled = 0
host = 1.1.1.1
index = test
index_time_mode = dbColumn
interval = 900
mode = rising
query = SELECT *\
FROM "mytable"\
WHERE "ID" > ?\
ORDER BY "ID" ASC
source = XYZ
sourcetype = XYZ:lis
input_timestamp_column_number = 28
query_timeout = 60
tail_rising_column_number = 1
max_rows = 10000000
fetch_size = 100000

When I run the query using dbxquery in Splunk, I do get more than 10k events. I also tried max_rows = 0, which should ingest everything, but it's not working. How can I ingest unlimited rows?
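For context, a rising-column input conceptually works like the sketch below (illustrative Python against SQLite, not DB Connect internals; the table and column names come from the post). Each scheduled run remembers the highest ID seen and fetches the next batch above it, so with interval = 900 the input runs four times per hour, and whatever effective per-run batch cap applies bounds hourly ingestion:

```python
import sqlite3

def poll_rising(conn, checkpoint, batch_limit):
    """One polling cycle of a rising-column input (conceptual sketch).

    Fetch rows whose rising column "ID" is greater than the saved
    checkpoint, oldest first, up to batch_limit rows; return the rows
    and the new checkpoint to persist for the next cycle.
    """
    cur = conn.execute(
        'SELECT * FROM mytable WHERE "ID" > ? ORDER BY "ID" ASC LIMIT ?',
        (checkpoint, batch_limit),
    )
    rows = cur.fetchall()
    # If nothing came back, the checkpoint stays where it was.
    new_checkpoint = rows[-1][0] if rows else checkpoint
    return rows, new_checkpoint
```

This is only a model of the mechanism; the actual cap you are hitting may come from a different layer (the input settings, an app-wide limit, or the driver), so it is worth checking the DB Connect logs for the effective batch size per run.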
I'm working on a dashboard in which the user enters a list of hosts. The issue I'm running into is that they must add an asterisk to the host name or it isn't found in the search. This is what the SPL looks like:

index=os_* (`wineventlog_security` OR sourcetype=linux_secure) host IN ( host1*, host2*, host3*, host4*, host5*, host6*, host7*, host8* ) earliest=-7d@d
| dedup host
| eval sourcetype=if(sourcetype = "linux_secure", sourcetype, source)
| fillnull value=""
| table host, index, sourcetype, _raw

If there is no * then there are no results. What I would like to be able to do is have them enter the hostname or FQDN, in either upper or lower case, and have the SPL change it to lower case, remove any FQDN parts, add the *, and then search. So far I haven't come up with SPL that works. Any thoughts? TIA, Joe
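The normalization described here (lowercase, strip FQDN parts, append the wildcard) is simple to state outside SPL; a hypothetical Python sketch of the transformation the dashboard input needs:

```python
def normalize_host(entry: str) -> str:
    """Lowercase a user-entered host, drop any domain suffix,
    and append a wildcard for prefix matching."""
    short = entry.strip().lower().split(".")[0]
    return short + "*"

print(normalize_host("Host1.corp.example.com"))  # host1*
print(normalize_host("HOST2"))                   # host2*
```

In SPL the same idea is typically done inside a subsearch with eval (lower(), mvindex(split(host, "."), 0), and string concatenation), which is what the accepted tstats-based answer in this thread does.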
Hi, please share the configuration documentation for the Panorama side for integrating this app with Splunk SOAR.
That's what I thought, too. So I was surprised when I looked at the upgrade path table and did not see it mention upgrading to 6.2.1 first, and then going to 6.2.2 if you started at 6.2.0. 
But are the events indexed but the fields are not extracted or are the events not ingested at all?
You skipped the first point, which says that early versions must be upgraded one by one until you reach 4.10.something. From there on you don't have to go version by version; you can skip straight to 6.2.1. But you can't upgrade to 6.2.2 from any version other than 6.2.1. That's how I read it.
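That reading can be sketched as a tiny decision helper (illustrative only; the version boundaries here come from the posts in this thread, not from the official upgrade matrix):

```python
def next_upgrade_step(current: str) -> str:
    """Suggest the next SOAR upgrade target under the reading above:
    6.2.1 is the mandatory stop before 6.2.2, and anything between
    ~4.10.x and 6.2.0 can jump straight to 6.2.1."""
    if current == "6.2.2":
        return "6.2.2"   # already current, nothing to do
    if current == "6.2.1":
        return "6.2.2"
    return "6.2.1"       # e.g. 6.2.0 or earlier (after the 4.10.x chain)

print(next_upgrade_step("6.2.0"))  # 6.2.1
```

So from 6.2.0 the path would be 6.2.0 → 6.2.1 → 6.2.2; verify against the official documentation before upgrading a production warm-standby pair.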
That's pretty much exactly what I was looking for. Thank you.
Hello, for this question I am referencing the documentation page: https://docs.splunk.com/Documentation/SOARonprem/6.2.2/Install/UpgradePathForUnprivilegedInstalls

There are two sets of conflicting information, and I do not know how to proceed with my ON PREMISE, UNPRIVILEGED, PRIMARY + WARM STANDBY configuration (database is on the instances, not external).

At the top of the documentation, it states: "Unprivileged Splunk SOAR (On-premises) running a release earlier than release 6.2.1 can be upgraded to Splunk SOAR (On-premises) release 6.2.1, and then to release 6.2.2." It says CAN BE. So... is it optional?

It also states: "All deployments must upgrade to Splunk SOAR (On-premises) 6.2.1 before upgrading to higher releases in order to upgrade the PostgreSQL database." It says MUST UPGRADE. So... is it mandatory?

But then, toward the BOTTOM of the table, I'm looking at the row for a starting version of "6.2.0". Steps 1 and 2 are conditionals for clustered deployments and external PostgreSQL databases; step 3 goes directly to upgrading to 6.2.2.

So... do I, or do I NOT, upgrade to 6.2.1 first?
Not the most efficient way of doing what? You could improve the performance of the query by combining the first two commands.

index=myindex (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe")
    process!="C:\\Windows\\System32\\svchost.exe"
    process!="C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe"
    process!="C:\\Program Files\\Common Files\\McAfee\\*"
    process!="C:\\Program Files\\McAfee*"
    process!="C:\\Windows\\System32\\enstart64.exe"
    process!="C:\\Windows\\System32\\wbem\\WmiPrvSE.exe"
    process!="C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe"
| table _time, source, subject, object_file_path, SubjectUserName, process, result

Legibility can be improved a little by the IN operator.

index=myindex (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe")
    NOT process IN (
        "C:\\Windows\\System32\\svchost.exe"
        "C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe"
        "C:\\Program Files\\Common Files\\McAfee\\*"
        "C:\\Program Files\\McAfee*"
        "C:\\Windows\\System32\\enstart64.exe"
        "C:\\Windows\\System32\\wbem\\WmiPrvSE.exe"
        "C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe")
| table _time, source, subject, object_file_path, SubjectUserName, process, result
Howdy, I'm fairly new to Splunk and couldn't google the answer I wanted, so here we go. I am trying to simplify my queries and filter down the search results better. Current example query:

index=myindex
| search (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe")
    process!="C:\\Windows\\System32\\svchost.exe"
    process!="C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe"
    process!="C:\\Program Files\\Common Files\\McAfee\\*"
    process!="C:\\Program Files\\McAfee*"
    process!="C:\\Windows\\System32\\enstart64.exe"
    process!="C:\\Windows\\System32\\wbem\\WmiPrvSE.exe"
    process!="C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe"
| table _time, source, subject, object_file_path, SubjectUserName, process, result

This is just an example; I do it the same way for multiple different fields and indexes. I know it's not the most efficient way of doing it, but I don't know any better ways. Usually I'll start broad and whittle down the things I know I'm not looking for. Is there either a way to simplify this (I could possibly do regex, but I'm not really good at that) or something else to make my life easier, such as combining all the values I want to filter on for one field? Any and all help/criticism is appreciated.
This is the result of the snippet I posted.
I don't have a "saved search" for this query, unfortunately, as I'm not yet able to make an actual "saved search". I'm just trying to perform some filtering on the results of a search made within the dashboard without reloading the search. I've attempted what I think you're proposing, but the "PostProcessTable"/"PostProcessSearch", which is supposed to load the job from the "BaseTable"/"BaseSearch", is not loading. Instead, it reads "Waiting for input...". I will note that I am on Splunk version 9.0.4, and the switch you pointed out, "Access search results or metadata", reads as "Use search results or job status as tokens" in my version of Dashboard Studio. I'm not sure if the issue is:

- my version of Splunk being 9.0.4
- the fact that I'm not using a saved search, or
- I'm implementing your proposal incorrectly (very very possible)

See example snippet below:

"visualizations": {
    "viz_A2Ecjpct": {
        "type": "splunk.table",
        "dataSources": { "primary": "ds_fpJiS8Hp" },
        "title": "BaseTable"
    },
    "viz_Ok7Uvz2b": {
        "type": "splunk.table",
        "title": "PostProcessTable",
        "dataSources": { "primary": "ds_q4BDo5Wr" }
    }
},
"dataSources": {
    "ds_fpJiS8Hp": {
        "type": "ds.search",
        "options": {
            "query": "| makeresults count=5",
            "queryParameters": { "earliest": "-15m", "latest": "now" },
            "enableSmartSources": true
        },
        "name": "BaseSearch"
    },
    "ds_q4BDo5Wr": {
        "type": "ds.search",
        "options": {
            "query": "| loadjob $ds_fpJiS8Hp:job.sid$",
            "enableSmartSources": true
        },
        "name": "PostProcessSearch"
    }
},
Hi. Running 9.0.6, and a user (who is the owner) can schedule REPORTS, but not DASHBOARDS. It's a CLASSIC dashboard (not the new fancy Stooooodio one).

Dashboards --> Find Dashboard --> Edit button --> NO 'Edit Schedule'
Open dashboard, top right, Export --> NO 'Schedule PDF'

My local admin says 'maybe they changed something in 9.0.6', but I'm unconvinced until this legendary community agrees. It "feels" like a missing permission is all.