All Posts

A more fundamental problem is that by insisting on regex for this log, you are treating the structured JSON log eilog.EILog as a text string. It is NOT. It is much more robust to use Splunk's built-in, QA-tested capabilities to handle structured data. Have you tried my suggestion

| rex "eilog.EILog:\s*(?<eilog>{.+})"
| spath input=eilog
| spath input=jsonRecord

and not gotten all the data fields in this JSON? As I illustrated previously, this should give you Task Name = "Cash Apps PAPI" along with dozens of other key-value pairs.
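The idea above can be sketched outside Splunk to show why structured parsing beats per-field regexes: isolate the JSON payload once, then let a JSON parser yield every field. The sample event (and the "Action Name" field) below are hypothetical.

```python
import json
import re

# Sample event mimicking the structure described above; the payload
# fields other than "Task Name" are made up for illustration.
raw = 'eilog.EILog: {"Task Name": "Cash Apps PAPI", "Action Name": "Create"}'

# Step 1: isolate the JSON payload (the equivalent of the rex command).
match = re.search(r"eilog\.EILog:\s*(?P<eilog>{.+})", raw)

# Step 2: parse it as structured data (the equivalent of spath), which
# yields every key-value pair without writing a regex per field.
fields = json.loads(match.group("eilog"))
print(fields["Task Name"])  # Cash Apps PAPI
```

One rex to carve out the payload plus spath is all the pattern matching required; every new field in the JSON then comes for free.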
The way I read your premise, this sounds like transaction logic. So let me first clarify your use case. Your data look like:

_time                id   message           state
1969-12-31 16:00:00  101  executed          started
1969-12-31 16:00:04  102  activity printed  started
1969-12-31 16:00:09  101  null              in progress
1969-12-31 16:00:10  102  null              in progress
1969-12-31 16:00:18  102  none              completed
1969-12-31 16:00:24  101  none              completed

Note I added some time interleaving between 101 and 102 to make the transaction nature more obvious. (Never mind that the date is from 1969; that is just for ease of emulation.) You want some results like:

_time                duration  eventcount  id   message           state
1969-12-31 16:00:04  14        3           102  activity printed  completed<-in progress<-started
1969-12-31 16:00:00  24        3           101  executed          completed<-in progress<-started

Here I ignored the format of the expected output in your earlier comment; I just want to confirm that "state" goes through "started", "in progress", and "completed" to form a transaction for each unique "id". Your material requirement is to obtain a single value for "message" that is NEITHER "null" nor "none". Is this correct? The result as illustrated here can be obtained with

| transaction id startswith="state=started" endswith="state=completed"
| eval message = mvfilter(NOT message IN ("none", "null"))
| eval state = mvjoin(state, "<-")

The first two commands literally implement my interpretation of your intentions. The third line is just a visual element to make the state transition obvious for each id. In my mind, the above results table is sufficient, and is more representative of the problem.
But if you really want to list each event, like:

_time                id   message           state
1969-12-31 16:00:00  101  executed          started
1969-12-31 16:00:04  102  activity printed  started
1969-12-31 16:00:09  101  executed          in progress
1969-12-31 16:00:10  102  activity printed  in progress
1969-12-31 16:00:18  102  activity printed  completed
1969-12-31 16:00:24  101  executed          completed

you can either use eventstats

| eventstats values(message) as message by id
| eval message = mvfilter(NOT message IN ("none", "null"))

or streamstats as @bowesmana suggested

| streamstats values(message) as message by id
| eval message = mvfilter(NOT message IN ("none", "null"))

To emulate the input, I added _time to @bowesmana's formula because it's just simpler.

| makeresults format=csv data="id,message,state,_time
101,executed,started,0
102,activity printed,started,4
101,null,in progress,9
102,null,in progress,10
102,none,completed,18
101,none,completed,24"
| eval _raw = "doesn't matter" ``` mock field _raw is important for transaction ```
``` data mockup above ```
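The eventstats approach above can be emulated in plain Python to make the logic explicit: for each id, collect all message values, keep the one that is neither "null" nor "none" (what mvfilter does), and write it back onto every event. This is a sketch of the logic, not of Splunk internals.

```python
from collections import defaultdict

# The mock events from the makeresults block above.
events = [
    {"id": "101", "message": "executed", "state": "started"},
    {"id": "102", "message": "activity printed", "state": "started"},
    {"id": "101", "message": "null", "state": "in progress"},
    {"id": "102", "message": "null", "state": "in progress"},
    {"id": "102", "message": "none", "state": "completed"},
    {"id": "101", "message": "none", "state": "completed"},
]

# values(message) by id, filtered like mvfilter(NOT message IN ("none","null")).
real_message = defaultdict(str)
for e in events:
    if e["message"] not in ("none", "null"):
        real_message[e["id"]] = e["message"]

# Broadcast the surviving value back to every event of that id,
# the way eventstats attaches its aggregate to each row.
for e in events:
    e["message"] = real_message[e["id"]]
```

The streamstats variant differs only in that the aggregate is a running one, so early events see only the values accumulated so far.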
Is this now solved using answers to this question and your very similar question https://community.splunk.com/t5/Splunk-Search/how-to-retrieve-the-value-from-json-input-using-splunk-query/m-p/686386#M234154?
Hello,

The Forwarder ingestion latency indicator is showing red on my search head.

Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 5474815.
Message from 452CE67F-3C57-403C-B7B1-E34754172C83:10.250.2.7:3535

Can anyone please provide any suggestions?
| makeresults format=csv data="ID,message,state
101,executed,started
101,null,in progress
101,none,completed
102,activity printed,started
102,null,in progress
102,none,completed"
| eval startedMessage=if(state=="started",message,null())
| eventstats values(startedMessage) as startedMessage by ID
| eval message=if(state=="completed", startedMessage, message)
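The three eval/eventstats steps above boil down to: remember the message seen at state=="started" for each ID, then substitute it on that ID's "completed" row. A minimal Python sketch of the same logic, using the mock rows from the makeresults block:

```python
# Mock rows mirroring the makeresults data above.
rows = [
    {"ID": "101", "message": "executed", "state": "started"},
    {"ID": "101", "message": "null", "state": "in progress"},
    {"ID": "101", "message": "none", "state": "completed"},
    {"ID": "102", "message": "activity printed", "state": "started"},
    {"ID": "102", "message": "null", "state": "in progress"},
    {"ID": "102", "message": "none", "state": "completed"},
]

# Step 1+2: capture the "started" message per ID (startedMessage + eventstats).
started = {r["ID"]: r["message"] for r in rows if r["state"] == "started"}

# Step 3: substitute it on the "completed" row (the final eval).
for r in rows:
    if r["state"] == "completed":
        r["message"] = started[r["ID"]]
```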
@gcusello Thanks, I tried but am getting this error:

Error in 'SearchParser': Missing a search command before '^'. Error at position '461' of search query 'search index=abc source="http:clhub-preprod" "bt-f...{snipped} {errorcontext = taskName>[^\"]+)"}'.
Hi @kranthimutyala2 , as @yuanliu also hinted, in Splunk you must use three backslashes instead of the two that work in regex101:

| rex field=event "{\\\n \\\"Task Name\\\": \\\"(?<taskName>[^\"]+)"

It makes a difference: I opened a case for a behavior that differed from the documentation, and the documentation was modified! I don't know why the Splunk project doesn't want to fix it. Ciao. Giuseppe
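The underlying problem can be sketched in Python (the sample text is hypothetical): the pattern passes through a string-literal layer before it reaches the regex engine, and each layer consumes one level of backslashes. That is why a pattern that works on regex101, where you type the regex directly, needs extra backslashes once it is pasted inside Splunk's double-quoted rex string.

```python
import re

# Four characters typed in the source; only two survive the
# string-literal layer and reach the regex engine: \ and "
literal = "\\\""
assert literal == '\\"' and len(literal) == 2

text = '{\n "Task Name": "Cash Apps PAPI"}'

# Written with escapes at the string-literal layer, this pattern reaches
# the regex engine as:  "Task Name": "([^"]+)"
pattern = "\"Task Name\": \"(?P<taskName>[^\"]+)\""
print(re.search(pattern, text).group("taskName"))  # Cash Apps PAPI
```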
| inputlookup [| makeresults | eval search="audit_fisma".strftime(relative_time(now(), "@w-1w"), "%m%d").".csv" | table search]
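The subsearch above builds the lookup name from a snapped timestamp. Its date arithmetic can be sketched in Python, assuming Splunk's @w snaps to the start of the week (Sunday): snap to the week, step back one week, format as month+day.

```python
from datetime import date, timedelta

def audit_filename(today: date) -> str:
    """Mimic "audit_fisma" . strftime(relative_time(now(), "@w-1w"), "%m%d") . ".csv":
    snap to the start of the current week (Sunday, Splunk's @w),
    go back one week, and format as %m%d."""
    # Python's weekday(): Monday=0 ... Sunday=6, so days since last Sunday is:
    days_since_sunday = (today.weekday() + 1) % 7
    week_start = today - timedelta(days=days_since_sunday)
    previous_week = week_start - timedelta(weeks=1)
    return "audit_fisma" + previous_week.strftime("%m%d") + ".csv"

# A Tuesday and the preceding Sunday resolve to the same file name.
print(audit_filename(date(2024, 4, 9)))  # audit_fisma0331.csv
```

Any run during the same week therefore resolves to the same file name, which is what makes the scheduled report stable between uploads.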
@yuanliu @gcusello I'm using

rex field=event "{\\n \\"Task Name\\": \\"(?<taskName>[^\"]+)\\""

It's working in regex101 but not in Splunk.
@yuanliu I am able to extract this, but I need the field values for Task Name, Action Name, DetailText, etc.
Hello, has the problem been solved? If so, could you share how it was solved?
Hello, I am in need of some help from the community. Is it possible to create a token in a scheduled report and create trends? I have a file that gets uploaded every 2 weeks called audit_fisma(month/date). Every 2 weeks the file name will stay the same but the month and date will change, for example audit_fisma0409.csv. I have 6 different fields that will need to be compared based on the current week and the previous week. Do I also have to create a report for each field and its trend? Here is a sample of the query that I am working on. This drafted query reflects the weeks of 04/09 and 03/28. My goal is to create a report that will automatically pull the file based on the new files that get uploaded every 2 weeks, so that I don't have to manually change the dates. I hope this was enough information.

| inputlookup audit_fisma0409.csv
| table "Security Review Completion Date"
| replace -* with NA in "Security Review Completion Date"
| eval time2=if('Security Review Completion Date'<relative_time(now(),"-1Y"),"Expired","Not_expired")
| stats count by time2
| where time2="Expired"
| append [
  | inputlookup audit_fisma0328.csv
  | table "Security Review Completion Date"
  | replace -* with NA in "Security Review Completion Date"
  | eval time2=if('Security Review Completion Date'<relative_time(now(),"-1Y"),"Expired","Not_expired")
  | stats count by time2
  | where time2="Expired"]
| transpose
| where column="count"
| eval "Security Review Completed" = round('row 1'/'row 2'-1,2)
| eval "Security Review Completed" = round('Security Review Completed' * 100, 0)
| eval _time=strftime(now(),"%m/%d/%Y")
| table "Security Review Completed" _time
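The tail of that search (transpose plus the two evals) computes a percent change between the current week's "Expired" count (row 1) and the previous week's (row 2). A minimal Python sketch of that arithmetic, with made-up counts:

```python
def percent_change(current: int, previous: int) -> int:
    """Mirror the two evals above:
    round('row 1'/'row 2'-1, 2), then round(... * 100, 0)."""
    ratio = round(current / previous - 1, 2)
    return round(ratio * 100)

# Hypothetical counts: 120 expired this week vs 100 last week -> +20%.
print(percent_change(120, 100))  # 20
print(percent_change(90, 100))   # -10
```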
Hi @munang , answering your questions:
1) You'll have 1 year of data in your DM: if you have 1 year of data in your indexes, you'll load it all into the DM; if you have data for a shorter period, you'll load all of it and retain it for 1 year.
2) I don't fully understand your question: you load the last 5 minutes of data into the DM every 5 minutes; when your data exceed the retention period, they will be deleted.
Ciao. Giuseppe
It is an AIO (search head + indexer); I think the disk performance is also lacking, hence the issue.
Thank you so much for such a detailed addition @PickleRick ! 
Hi @splunky_diamond , the only difference is that if you locate server.conf in $SPLUNK_HOME/etc/system/local, you cannot manage it using a Deployment Server; if instead you put this file in an app deployed by the DS, you can apply updated and modified configurations through the DS. There isn't any other difference. Ciao. Giuseppe
Yes, corrected. It's only working where message="executed", but not where the message values are different for other IDs. Please note that the message value could be anything for an ID, while the values of the state field are the same.
Hi @yh , if your Forwarder is overloaded (especially if you have many events and many transformations to apply), you risk losing events. For this reason, it's better to use rsyslog, writing the file to read to disk. Then, if you have a performant disk on the Heavy Forwarder (at least 800 IOPS), you could apply parallel pipelines (https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Pipelinesets). Finally, you could add more CPUs to your HF. Ciao. Giuseppe
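For reference, parallel pipelines are enabled in server.conf on the forwarder; a minimal sketch, where the value 2 is only an example and each additional pipeline consumes extra CPU and disk I/O, so it is only worth it on hardware like that described above:

```ini
# server.conf on the Heavy Forwarder
[general]
parallelIngestionPipelines = 2
```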
Hi @wangyu , did you follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Clusterdeploymentoverview and https://docs.splunk.com/Documentation/Splunk/9.2.1/DistSearch/SHCdeploymentoverview ? I suppose that you checked the connections between the members on all the required ports: IDX replication (by default 9100), SHC replication (9200), connection between the IDXs and the Cluster Manager (8089), connection between the SHs and the Deployer (8089), and connection between the SHs and the IDXs (8089). Then, how many SHs do you have in your SHC? There must be at least 3. Ciao. Giuseppe
Not sure why you are doing all those appends/makeresults, but look at your id field: the streamstats logic uses ID, not id. Field names are case sensitive.