All Posts

Try something like this:

| eval root=mvjoin(mvindex(split(policy,"_"),0,1),"_")
| eval version=mvindex(split(policy,"_"),2)
| timechart span=48h values(version) as version by root
| eval date=if(_time < relative_time(now(),"-2d"), "Last 48 Hours", "Today")
| fields - _time _span
| transpose 0 header_field=date column_name=policy
| eval "New version"=if('Last 48 Hours' == Today, null(), Today)
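For readers less familiar with the multivalue functions used above, here is a rough Python equivalent of the root/version extraction (the policy name is an invented example, not from real data):

```python
def split_policy(policy):
    """Split a policy name like 'backup_daily_v3' into its root and version,
    mirroring the mvjoin/mvindex/split combination in the SPL above."""
    parts = policy.split("_")
    root = "_".join(parts[0:2])   # mvjoin(mvindex(split(policy,"_"),0,1),"_")
    version = parts[2]            # mvindex(split(policy,"_"),2)
    return root, version

print(split_policy("backup_daily_v3"))  # ('backup_daily', 'v3')
```

Note that mvindex with two arguments returns an inclusive range, which is why elements 0 through 1 map to Python's `parts[0:2]`.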
Hi @btheneghan, if you have already extracted the manual_entry field and the format is always the one you described in your samples, you could use this regex:

| rex field=manual_entry "^\#\d+\s(?<manual_entry>.*)"

If you haven't extracted the manual_entry field yet, and the format is always the one you described in your samples, you could use:

| rex "^\#\d+\s(?<manual_entry>.*)"

Ciao. Giuseppe
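The same pattern can be tried outside Splunk; here is a quick Python check (Python uses `(?P<name>...)` for named groups, and the sample lines are taken from the question in this thread):

```python
import re

# Same pattern as the rex above: skip '#' plus the epoch digits and one space,
# then capture the rest of the line as manual_entry.
pattern = re.compile(r"^\#\d+\s(?P<manual_entry>.*)")

samples = [
    "#1724872356 exit",
    "#1724872485 sudo cat /etc/profile.d/join-timestamp-history.sh",
]
for line in samples:
    m = pattern.match(line)
    print(m.group("manual_entry"))
# exit
# sudo cat /etc/profile.d/join-timestamp-history.sh
```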
Hi there, I have a file monitoring stanza on a universal forwarder where I filter with transforms.conf to keep only the log entries I need, because the server writes log entries from multiple business processes into the same logfile. Now I need the entries of another process, with a different ACL, in a different index from that logfile, but in our QS cluster, while the first data input still ingests into our PROD cluster.

So I have my inputs.conf:

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 1>
sourcetype = <dataspecific sourcetype 1>

a props.conf:

[<dataspecific sourcetype 1>]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRUNCATE = 1500
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 20
TIME_FORMAT = [%y/%m/%d %H:%M:%S]
TRANSFORMS-set = setnull, setparsing

and a transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = (<specific regex>)
DEST_KEY = queue
FORMAT = indexQueue

As a standalone stanza I would need the new input like this, with its own setparsing transforms:

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 2>
sourcetype = <dataspecific sourcetype 2>
_TCP_ROUTING = qs_cluster

To be honest, I could just create a second stanza that is slightly different and still reads the same file, but I don't want two tailreaders on the same file. What possibilities do I have? Thanks in advance
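One avenue worth exploring for this kind of requirement is CLONE_SOURCETYPE in transforms.conf, which copies matching events into a second sourcetype that can then be indexed and routed independently, without a second monitor stanza. A rough, untested sketch (stanza and placeholder names are invented, and CLONE_SOURCETYPE only takes effect where parsing happens, i.e. on a heavy forwarder or indexer, not on a universal forwarder):

```
# transforms.conf (sketch; assumes parsing happens on a HF or indexer)
[clone_for_qs]
REGEX = (<regex for the second process>)
CLONE_SOURCETYPE = <dataspecific sourcetype 2>

[set_index_qs]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = <dataspecific index 2>

[route_qs]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = qs_cluster

# props.conf
[<dataspecific sourcetype 1>]
TRANSFORMS-set = setnull, setparsing, clone_for_qs

[<dataspecific sourcetype 2>]
TRANSFORMS-route = set_index_qs, route_qs
```

The tcpout group name (qs_cluster here) would need to exist in outputs.conf on the parsing tier; verify the details against the transforms.conf spec before relying on this.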
Hello @loganramirez, Can you confirm whether the role of the user trying to schedule a PDF has the list_settings capability enabled? As mentioned in the following doc, the list_settings capability is required for the menu option to be populated. Doc - https://docs.splunk.com/Documentation/Splunk/9.3.0/Viz/DashboardPDFs#Schedule_PDF_delivery

Thanks, Tejas.

--- If the above solution works, an upvote is appreciated !!
Hey, were you able to find the resolution on this?
I'm hitting the same problem. It looks like the Outlier Chart does not officially support drilldown, or it may need additional custom development.
Thanks for your guideline, but it does not work on the latest Splunk. It seems outlier_viz_drilldown.js needs some changes to adapt to the latest Splunk version? Can you tell me how to drill down to another dashboard? Also, the eval for isOutlier should be: | eval isOutlier=if('residual' < lowerBound OR 'residual' > upperBound, 1, 0)
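The corrected isOutlier eval is just a band check; sketched as plain logic (field names follow the reply above, the sample numbers are invented):

```python
def is_outlier(residual, lower_bound, upper_bound):
    """Mirrors: | eval isOutlier=if('residual' < lowerBound OR 'residual' > upperBound, 1, 0)"""
    return 1 if residual < lower_bound or residual > upper_bound else 0

print(is_outlier(5.0, -2.0, 2.0))   # 1 (above the band)
print(is_outlier(0.5, -2.0, 2.0))   # 0 (inside the band)
```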
Anyone who comes across this issue, please upvote the following idea for a configuration option to disable INDEXED_EXTRACTIONS via an app's local props.conf. https://ideas.splunk.com/ideas/EID-I-2400
Here's the approach I would use. It may not be the best way.

1. Search the last 48 hours for the desired events
2. Extract the Policy_Name field into Last_48_Hours_Policy_Names
3. Extract the "root" policy name ("policy_n_") from Last_48_Hours_Policy_Names
4. Append the search of today for the desired events
5. Extract the Policy_Name field into Today_Policy_Names
6. Extract the "root" policy name ("policy_n_") from Today_Policy_Names
7. Regroup the results on the root policy name field
8. Discard the root policy name field
9. Compare Last_48_Hours_Policy_Names to Today_Policy_Names. If different, set New_Policy_Names to Today_Policy_Names
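The comparison at the heart of those steps can be sketched in plain Python (the root-name pattern and policy names are illustrative, not from real data):

```python
import re
from collections import defaultdict

def group_by_root(policy_names):
    """Group full policy names by their root ('policy_1_' from 'policy_1_v2')."""
    grouped = defaultdict(set)
    for name in policy_names:
        m = re.match(r"(policy_\d+_)", name)
        if m:
            grouped[m.group(1)].add(name)
    return grouped

last_48h = group_by_root(["policy_1_v1", "policy_2_v1"])
today = group_by_root(["policy_1_v2", "policy_2_v1"])

# If a root's name set differs between the two windows, today's names are new.
new_policies = {root: names for root, names in today.items()
                if names != last_48h.get(root)}
print(new_policies)  # {'policy_1_': {'policy_1_v2'}}
```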
There's probably more than one way to do that.  If you want to use rex then this should do it.  It just takes everything after the first space as the manual_entry field. | rex "\s(?<manual_entry>.*)"  
I have never been one to understand regex; however, I need to extract everything after the first entry (#172...) into its own field. Let's call it manual_entry. I'm getting tired of searching and randomly trying things.

#1724872356 exit
#1724872357 exit
#1724872463 cat .bashrc
#1724872485 sudo cat /etc/profile.d/join-timestamp-history.sh
#1724872512 exit
#1724877740 firefox

manual_entry
exit
exit
cat .bashrc
sudo cat /etc/profile.d/join-timestamp-history.sh
exit
firefox
Hello members, I'm struggling with something. I have configured the data inputs and the indexer name on the HF, pointed the app at the Search Head & Reporting tier, and also forwarded logs from the other system as syslog data to the heavy forwarder. I have also configured the same index for the HF at the cluster master and pushed it to all indexers, but when I look for that index on the SH (Search Head), there are no results.

Can someone help me please?

Thanks
Hi @yuanliu

Thanks for the suggestion. The option keepempty=true is something new I learned; I wish stats values() also had that option. However, when I tried keepempty=true, it added a lot more delay (3x) compared to using only dedup, perhaps because I have so many fields. I've been using fillnull to keep empty fields. The reason is that although one field is empty, I still want to keep the other fields. Your way of using foreach to re-assign the field to null() is awesome. Thanks for showing me this trick. Are there any benefits to moving "UNPSEC" back to null()? I usually just give it "N/A" for strings and 0 for numerics. I appreciate your help. Thanks
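The fillnull-then-restore round trip discussed here can be sketched outside SPL; the sentinel string and field names below are invented for illustration:

```python
SENTINEL = "UNSPEC"  # stand-in for fillnull's placeholder value

def fill_nulls(record, fields):
    """Replace missing values with a sentinel so grouping keeps the whole row,
    like | fillnull value="UNSPEC" <fields>."""
    return {f: record.get(f, SENTINEL) for f in fields}

def restore_nulls(record):
    """Mirror the foreach trick: turn the sentinel back into a real null."""
    return {f: (None if v == SENTINEL else v) for f, v in record.items()}

row = fill_nulls({"host": "web01"}, ["host", "user"])
print(restore_nulls(row))  # {'host': 'web01', 'user': None}
```

One benefit of restoring null() rather than keeping "N/A"/0 is that aggregate functions then skip the missing values instead of counting the placeholder as data.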
Hi @Gopikrishnan.Ravindran, Sorry for the late reply here. I've passed your email on to the CX team. Someone will be in touch with you shortly, as they have some follow-up questions for you.
@PickleRick Events are indexed, but fields are not extracted for the same day. For other days, there is no problem.
Thanks @PickleRick for answering. This is what I found works.

index=os_* (`wineventlog_security` OR sourcetype=linux_secure)
    [| tstats count WHERE index=os_* (source=* OR sourcetype=*) host IN ( $servers_entered$ ) by host
     | dedup host
     | eval host=host+"*"
     | table host]
| dedup host
| eval sourcetype=if((sourcetype == "linux_secure"),sourcetype,source)
| fillnull value=""
| table host, index, sourcetype, _raw
Upper/lowercase doesn't matter in a search term. Splunk matches case-insensitively (with the search command; the where command is case-sensitive). And looking for something is definitely not the same as looking for something*.
@bowesmana, @gcusello, and @yuanliu thanks for the responses.  This has been shelved due to funding issues.  If it gets funded, we will go back to the vendor and see if they can add something that will say this is new or timestamp it so we can keep track that way.
Hello, Splunk DB Connect is indexing only 10k events per hour at a time, no matter what settings I configure in inputs. The DB Connect version is 3.1.0. The DB Connect db_inputs.conf is:

[ABC]
connection = ABC_PROD
disabled = 0
host = 1.1.1.1
index = test
index_time_mode = dbColumn
interval = 900
mode = rising
query = SELECT *\
FROM "mytable"\
WHERE "ID" > ?\
ORDER BY "ID" ASC
source = XYZ
sourcetype = XYZ:lis
input_timestamp_column_number = 28
query_timeout = 60
tail_rising_column_number = 1
max_rows = 10000000
fetch_size = 100000

When I run the query using dbxquery in Splunk, I do get more than 10k events. I also tried max_rows = 0, which basically should ingest everything, but it's not working.

How can I ingest unlimited rows?
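As a mental model of why a rising-column input with a per-run row cap yields a fixed number of events per interval, here is a simplified sketch (this is not DB Connect's actual implementation; the checkpoint handling, table, and batch size are invented):

```python
def fetch_rising(rows, checkpoint, batch_size):
    """Return up to batch_size rows with ID above the checkpoint, plus the new
    checkpoint (highest ID ingested) -- mirroring WHERE "ID" > ? ORDER BY "ID" ASC
    with a per-run row limit."""
    candidates = sorted((r for r in rows if r["ID"] > checkpoint),
                        key=lambda r: r["ID"])
    batch = candidates[:batch_size]
    new_checkpoint = batch[-1]["ID"] if batch else checkpoint
    return batch, new_checkpoint

table = [{"ID": i} for i in range(1, 26)]  # 25 rows waiting
ingested, ckpt = [], 0
# Each scheduled run ingests at most batch_size rows and advances the checkpoint,
# so throughput per hour = batch_size x (runs per hour), regardless of backlog size.
for _ in range(3):
    batch, ckpt = fetch_rising(table, ckpt, 10)
    ingested.extend(batch)
print(len(ingested), ckpt)  # 25 25
```

Under this model, a cap of 10,000 rows per run with interval = 900 (four runs per hour) would explain a 10k-per-hour ceiling only if some per-run limit is still in force despite the inputs settings shown, which is why checking for other limits (e.g. in limits.conf or the connection settings) is worthwhile.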