All Posts


@gcusello Thanks, I tried but I'm getting this error:  Error in 'SearchParser': Missing a search command before '^'. Error at position '461' of search query 'search index=abc source="http:clhub-preprod" "bt-f...{snipped} {errorcontext = taskName>[^\"]+)"}'.
Hi @kranthimutyala2 , as @yuanliu also hinted, in Splunk you must use three backslashes instead of the 2 you use in regex101: | rex field=event "{\\\n \\\"Task Name\\\": \\"(?<taskName>[^\"]+)" It's a known difference: I opened a case for a behavior different from the documentation, and the documentation was modified! I don't know why Splunk doesn't want to fix it! Ciao. Giuseppe
| inputlookup [| makeresults | eval search="audit_fisma".strftime(relative_time(now(), "@w-1w"), "%m%d").".csv" | table search]
@yuanliu @gcusello I'm using rex field=event "{\\n \\"Task Name\\": \\"(?<taskName>[^\"]+)\\"" and it's working in regex101 but not working in Splunk.
@yuanliu I'm able to extract this, but I need field values for Task Name, Action Name, DetailText, etc.
Hello, has the problem been solved? If so, could you share how it was solved?
Hello, I am in need of some help from the community. Is it possible to create a token in a scheduled report and create trends? I have a file that gets uploaded every 2 weeks called audit_fisma(month/date). Every 2 weeks the file name will stay the same but the month and date will change, for example audit_fisma0409.csv. I have 6 different fields that will need to be compared based on the current week and the previous week. Do I also have to create a report for each field and its trend? Here is a sample of the query that I am working on; this drafted query reflects the weeks of 04/09 and 03/28. My goal is to create a report that will automatically pull the file based off the new files that get uploaded every 2 weeks, so that I don't have to manually change the dates. I hope this was enough information.

| inputlookup audit_fisma0409.csv
| table "Security Review Completion Date"
| replace -* with NA in "Security Review Completion Date"
| eval time2=if('Security Review Completion Date'<relative_time(now(),"-1Y"),"Expired","Not_expired")
| stats count by time2
| where time2="Expired"
| append
    [ | inputlookup audit_fisma0328.csv
    | table "Security Review Completion Date"
    | replace -* with NA in "Security Review Completion Date"
    | eval time2=if('Security Review Completion Date'<relative_time(now(),"-1Y"),"Expired","Not_expired")
    | stats count by time2
    | where time2="Expired"]
| transpose
| where column="count"
| eval "Security Review Completed" =round('row 1'/'row 2'-1,2)
| eval "Security Review Completed" =round('Security Review Completed' * 100, 0)
| eval _time=strftime(now(),"%m/%d/%Y")
| table "Security Review Completed" _time
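Building on the inputlookup/makeresults approach shared above, a possible sketch for pulling both the current and the previous file automatically. This assumes the files always follow the audit_fismaMMDD.csv naming and that the two-week cycle lines up with the @w week snap; the -1w and -3w offsets are assumptions to adjust to the real upload schedule:

| inputlookup [| makeresults | eval search="audit_fisma".strftime(relative_time(now(), "@w-1w"), "%m%d").".csv" | table search]
| table "Security Review Completion Date"
| append
    [| inputlookup [| makeresults | eval search="audit_fisma".strftime(relative_time(now(), "@w-3w"), "%m%d").".csv" | table search]
    | table "Security Review Completion Date"]

The inner makeresults builds the file name as a string and the subsearch substitutes it as the inputlookup argument, so the scheduled report picks up the newest files without manual date edits.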
Hi @munang , answering your questions: 1) you'll have 1 year of data in your DM: if you have 1 year of data in your indexes, you'll load it into the DM; if you have data for a shorter period, you'll load all of it and keep it for 1 year. 2) I don't fully understand your question: you load the last 5 minutes of data into the DM every 5 minutes; when data exceeds the retention period, it is deleted. Ciao. Giuseppe
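For reference, a minimal datamodels.conf sketch of the setup described here; the data model name is a placeholder and the values mirror the 1-year retention and 5-minute update discussed above:

[My_DataModel]
acceleration = true
acceleration.earliest_time = -1y
acceleration.cron_schedule = */5 * * * *

acceleration.earliest_time controls how far back the summary is kept, while the cron schedule rebuilds the most recent data every 5 minutes.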
It is an AIO, search head + indexer, I think the disk performance is also lacking hence the issue.
Thank you so much for such a detailed addition @PickleRick ! 
Hi @splunky_diamond , the only difference is that, if you locate server.conf in $SPLUNK_HOME/etc/system/local, you cannot manage it using a Deployment Server; if instead you put this file in an app deployed by the DS, you can apply updated and modified configurations through the DS. There isn't any other difference. Ciao. Giuseppe
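As an illustration of the DS route (the app and server class names below are made up, not from the thread), the server.conf would live in $SPLUNK_HOME/etc/deployment-apps/org_all_server_base/local/ on the Deployment Server, with a serverclass.conf along these lines:

# serverclass.conf on the Deployment Server
[serverClass:all_managed_hosts]
whitelist.0 = *

[serverClass:all_managed_hosts:app:org_all_server_base]
restartSplunkd = true

The clients then receive the app under $SPLUNK_HOME/etc/apps/org_all_server_base, and later changes to server.conf only have to be made once on the DS.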
Yes, corrected. It's only working where message="executed", but not where the message values are different for the other IDs. Please note that the message value could be anything per ID, while the values of the state field are the same for all IDs.
Hi @yh , if your Forwarder is overloaded (especially if you have many events and many transformations to apply) you risk losing events; for this reason, it's better to use rsyslog, writing the files to disk for Splunk to read. Then, if you have a performant disk on the Heavy Forwarder (at least 800 IOPS) you could apply parallel pipelines (https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Pipelinesets). Lastly, you could add more CPUs to your HF. Ciao. Giuseppe
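For the parallel pipelines mentioned above, the setting lives in server.conf on the Heavy Forwarder; the value 2 here is only an example and should be sized to the available CPU cores and disk IOPS:

[general]
parallelIngestionPipelines = 2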
Hi @wangyu , did you follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Clusterdeploymentoverview and https://docs.splunk.com/Documentation/Splunk/9.2.1/DistSearch/SHCdeploymentoverview ? I suppose that you checked the connections between the members on all the required ports: IDX replication: 9100 by default, SHC replication 9200, connection between IDXs and Cluster Manager 8089, connection between SHs and Deployer 8089, connection between SHs and IDXs 8089. Then, how many SHs do you have in your SHC? They must be at least 3. Ciao. Giuseppe
Not sure why you are doing all those appends/makeresults - but look at your id field - the streamstats logic uses ID, not id - fields are case sensitive  
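In other words, a minimal sketch of the posted logic with the lowercase field name, everything else left as in the original search:

| eval needs_fill=if(message="executed" AND state="started", 1, 0)
| streamstats max(needs_fill) as needs_fill by id
| eval message=if(needs_fill=1 AND state="completed", "executed", message)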
I deployed the search head cluster and also deployed the indexer cluster, and connected the search head cluster to the indexer cluster. After downloading the sample data and uploading it to the indexers, all members of the indexer cluster can search the uploaded data. When searching from the search head cluster members, two of them cannot find the uploaded data and one can. "Unable to distribute to peer named 192.168.44.159 at uri=192.168.44.159:8089 using the uri scheme=https because peer has status=Down. Verify uri scheme, connectivity to the search peer, that the search peer is up, and that an equivalent level of system resources are available. See the Troubleshooting Manual for more information."
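One quick check that may help narrow this down: run the following from the affected search head member; peers that appear in the results are reachable over the management port, and any peer missing from the list (such as 192.168.44.159 here) is the one to inspect for network, certificate, or splunkd service issues:

| rest splunk_server=* /services/server/info
| table splunk_server serverName version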
Could anyone please help me to troubleshoot this issue? I need this to be fixed as soon as possible.
Thanks, I tried extracting the request and response using rex on _raw and filtering the fields using spath.
Is there any other way of handling JSON content that would be easier than using the rex command? Although my request is not completely in JSON format.
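If the JSON part can be isolated into its own field (as already done with rex on _raw), a hedged sketch of the spath route, assuming the extracted field is called event and contains valid JSON:

| spath input=event
| rename "Task Name" as taskName, "Action Name" as actionName, DetailText as detailText
| table taskName actionName detailText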
| makeresults
| eval state="started"
| eval message="executed"
| eval id="101"
| append
    [| makeresults
    | eval state="inprogess"
    | eval message="null"
    | eval id="101"]
| append
    [| makeresults
    | eval state="completed"
    | eval message="none"
    | eval id="101"]
| append
    [| makeresults
    | eval state="started"
    | eval message="activity printed "
    | eval id="102"]
| append
    [| makeresults
    | eval state="inprogess"
    | eval message="null"
    | eval id="102"]
| append
    [| makeresults
    | eval state="completed"
    | eval message="none"
    | eval id="102"]
| eval needs_fill=if(message="executed" AND state="started", 1, 0)
| streamstats max(needs_fill) as needs_fill by ID
| eval message=if(needs_fill=1 AND state="completed", "executed", message)

It's not working as expected; as mentioned, the value of the message field varies per ID, and only the value of the state field remains the same for all IDs.