All Posts

And at the same time it converts that field (the result of case) into a multivalue field which contains both of those values. As @yuanliu said, you must provide sample data that reproduces that "error" if you want us to be able to help you.
Hi Team, thanks for being there! I hope you are all doing great! I was working on a requirement to install and monitor Kubernetes using AppDynamics. I have gone through the video from Cisco U: https://www.youtube.com/watch?v=RTzMJxzSa9I But I have a question: do we not need a cluster agent? I don't seem to have used, or even named, a cluster agent anywhere in the process. Could you help me with this?
Please share the searches that are failing.
Here are the settings for props.conf (note that in Splunk .conf files comments must sit on their own line; a trailing # becomes part of the value):

# Should always be false
SHOULD_LINEMERGE = false
# Adds IP to the line breaking (if all lines start with IP)
LINE_BREAKER = ([\r\n]+)IP
NO_BINARY_CHECK = true
# Sets the time format
TIME_FORMAT = %e-%m-%y %T
# Use the time found after "At:"
TIME_PREFIX = At:
# Do not search further than needed for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 20
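For reference, these settings assume raw events shaped roughly like this (a hypothetical sample inferred from the settings, not from the poster's actual data):

IP 10.1.2.3 connection accepted At: 5-03-24 14:02:33
IP 10.1.2.4 connection refused At: 5-03-24 14:02:35

Each new event starts at "IP", and the timestamp is parsed from the text following "At:" in day-month-year format.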
@gcusello As previously stated, I implemented the setting SHOULD_LINEMERGE = false on the Splunk Cloud SH, which successfully resolved the issue. However, the logs contain HTML events, which are now being treated as individual events, making it difficult to extract the desired fields. Could you please advise on how we can address this?
Hi, I have a requirement to upgrade RHEL from version 7.9 to 8.x, and our infrastructure team is currently building a new set of servers running RHEL 8.x. Consequently, I will need to migrate Splunk from the existing RHEL 7.9 hosts to the new 8.x hosts. Our Splunk architecture is on-premises and includes multiple Search Heads (SHs) in a cluster, Indexers in a cluster, and various other components. Has anyone here performed a migration from one version of the same OS to another before? Could I please get some guidelines on how to perform this, especially concerning clustered components?

I have checked the below steps:
1. Stop Splunk Enterprise services.
2. Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host.
3. Install Splunk Enterprise on the new host.
4. Start Splunk Enterprise on the new instance.

I am specifically looking for any additional steps that need to be performed, particularly for clustered components. Thank you. Kiran
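For the copy in the steps above, a minimal per-host sketch (host name and paths are placeholders; this assumes $SPLUNK_HOME=/opt/splunk and the same Splunk version installed on both hosts):

# on the old RHEL 7.9 host
/opt/splunk/bin/splunk stop
# copy the full installation, preserving ownership and permissions
rsync -a /opt/splunk/ newhost:/opt/splunk/
# on the new RHEL 8.x host
/opt/splunk/bin/splunk start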
You need to clarify the problem in the search results as well as explain/illustrate your raw data. "Can't populate result" can have a million different meanings. Do you mean that you get a completely blank table, i.e., no results at all? If so, you probably do not have a field named correlationId in your raw data. Or do you mean that values(content.File.fprocess_message) as ProcessMsg gives all null output? You cannot expect volunteers to read your mind; explain in unambiguous terms.

You speak about ProcessMsg, but it is not obvious whether a field named "ProcessMsg" exists in the raw data, despite the suggestion implied by that coalesce function. Again, you cannot just ask volunteers to speculate from your code (aka mind-reading) what the raw data look like. Importantly, as @ITWhisperer questioned, why go through all the trouble of coalescing if you are going to discard it, then use the field name ProcessMsg to store the output of a stats function, as in values(content.File.fprocess_message) as ProcessMsg? Most importantly, what is content.File.fprocess_message? Do you have evidence that this field even has a value? Do you really mean

index="mulesoft" applicationName="ext" environment=DEV (*End of GL-import flow*) OR (message="GLImport Job Already Running, Please wait for the job to complete*") OR (message="process - No files found for import to ISG")
| rename content.File.fstatus as Status
| eval Status = case(
    like('Status', "%SUCCESS%"), "SUCCESS",
    like('Status', "%ERROR%"), "ERROR",
    like('message', "%process - No files found for import to ISG%"), "ERROR",
    like('message', "GLImport Job Already Running, Please wait for the job to complete"), "WARN")
| eval ProcessMsg = coalesce(ProcessMsg, message)
| stats values(content.File.fid) as "TransferBatch/OnDemand"
    values(content.File.fname) as "BatchName/FileName"
    values(ProcessMsg) as ProcessMsg
    values(Status) as Status
    values(content.File.isg_file_batch_id) as OracleBatchID
    values(content.File.total_rec_count) as "Total Record Count"
    by correlationId
| table Status Start_Time "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
Based on this search, I suspect that the raw software field is not JSON. Regardless, @richgalloway's suggestion of mvexpand is sound. But you must give examples of your software values; additionally, you probably omitted max_match=0 from your real rex command. I say this because, by reverse engineering (something you should not force volunteers to do for you), I see two distinct possible formats that the software field can take to produce the table you illustrated. Both possible formats require max_match=0, but each requires a different approach to applying mvexpand. Let me illustrate. (You should have illustrated your data in this manner.)

If you do | table hostname software, you probably see the table shown in your screenshot in Splunk's statistics view. However, never use a screenshot to illustrate data. (A screenshot is only useful when illustrating visualization anomalies.) The same display could come from two fundamentally different values.

1. When software is a multivalue field with distinct values like "cpe:/a:vendor1:product1:version1", "cpe:/a:vendor2:product2:version2", and so on, all in a single event. If this is the case, all you need is to apply mvexpand to software.

``` use when 'software' is multivalue ```
| mvexpand software
| rex field=software max_match=0 "cpe:\/a:(?<Vendor>[^:]+):(?<Product>[^:]+):(?<Version>.*)"
| table hostname, Vendor, Product, Version
| dedup hostname, Vendor, Product, Version

2. When software is single-value, but multiline, like

cpe:/a:vendor1:product1:version1 cpe:/a:vendor2:product2:version2 cpe:/a:vendor3:product3:version3 cpe:/a:vendor4:product4:version4

In this case, you need to first split software into a single-line, multivalue field before mvexpand. Like this:

``` use when 'software' is multiline ```
| eval software = split(software, " ")
| mvexpand software
| rex field=software max_match=0 "cpe:\/a:(?<Vendor>[^:]+):(?<Product>[^:]+):(?<Version>.*)"
| table hostname, Vendor, Product, Version
| dedup hostname, Vendor, Product, Version

In both cases, you can get the exact result you illustrated in the OP. But you must know which data format you have. Here are two data emulations for you to play with and compare with real data. You can attach them to their corresponding mvexpand method to see how they turn into the desired tabulation.

1. Multivalue 'software':

| makeresults format=csv data="hostname
hostname1"
| eval software = split("cpe:/a:vendor1:product1:version1 cpe:/a:vendor2:product2:version2 cpe:/a:vendor3:product3:version3 cpe:/a:vendor4:product4:version4", " ")
| append [makeresults format=csv data="hostname
hostname2"
| eval software = split("cpe:/a:vendor1:product2:version2 cpe:/a:vendor2:product4:version1 cpe:/a:vendor3:product3:version5 cpe:/a:vendor4:product6:version3", " ")]
``` emulates multivalue 'software' ```

2. Multiline 'software':

| makeresults format=csv data="hostname
hostname1"
| eval software = "cpe:/a:vendor1:product1:version1 cpe:/a:vendor2:product2:version2 cpe:/a:vendor3:product3:version3 cpe:/a:vendor4:product4:version4"
| append [makeresults format=csv data="hostname
hostname2"
| eval software = "cpe:/a:vendor1:product2:version2 cpe:/a:vendor2:product4:version1 cpe:/a:vendor3:product3:version5 cpe:/a:vendor4:product6:version3"]
``` emulates multiline 'software' ```

Of course, reverse engineering (aka mind-reading), though laborious and generally loathed by volunteers, is often incorrect.
There could be some other data format that I haven't considered that will give you the undesired output after rex; it is even possible that some format will give you that undesirable output without max_match=0. If so, only you can provide the real data format (anonymized as needed) to help yourself.
As you have to be explicit with stanza names, you could put all of these saved searches into a dedicated app, and then set the ttl in the [default] stanza for all searches in that app.
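A minimal sketch of what that could look like in the app's local/savedsearches.conf (the value here is just an example; dispatch.ttl is the time to live of the search artifacts, in seconds):

[default]
dispatch.ttl = 7200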
Well, kind of. We use the following setup:

- A development instance with a home-grown splunkgit app which allows us to push and pull apps to a git repo
- A CI/CD pipeline which runs an app through AppInspect
- A cron job which puts an app with a new successful build from the production branch onto our deployer and thus deploys it to production

Our workflow is:

- Develop, either:
  - with an IDE against the git repo, then pull to the dev environment for testing, or
  - GUI-based development, then push the results back from the development instance to the git repo
- Merge to the production branch
- The pipeline builds it (AppInspect) and it gets deployed

You have to be careful with stuff in /local. We have a push-back cron job on our search heads which pushes the current apps back into another branch on the git repo, so we always have a valid backup and version history.
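As a sketch, the deploy cron job on the deployer could look something like this (paths, host name, schedule, and app name are hypothetical):

# unpack the latest successful build into the deployer's staging directory, then push the bundle
0 2 * * * tar -xzf /builds/prod/myapp-latest.tgz -C /opt/splunk/etc/shcluster/apps/ && /opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 --answer-yes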
You need to show sample data with which the case function fails to produce the expected result, then show the actual results. The stats just makes troubleshooting more difficult. But even if you want to include stats, you still need to show sample data.
Ok, then you should check on SCP that those events didn't go to the wrong index and don't have a wrong timestamp. You should also look into the future, e.g. with latest set to now + 1 year or so.
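A minimal SPL sketch for spotting events with skewed or future timestamps (the index name is a placeholder):

index=your_index earliest=1 latest=+1y
| eval index_lag = _indextime - _time
| table _time _indextime index_lag host source sourcetype
| sort - index_lag

A large positive index_lag means the event was indexed long after its timestamp; a negative one means the timestamp lies in the future.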
This is a Splunk forum. You need to describe in detail what your data source contains, and how an analyst would detect lateral movement without using Splunk, step by step. Then illustrate the desired output.
Hi @Millowster, you have to create a drilldown. Follow the documentation at https://docs.splunk.com/Documentation/Splunk/9.2.0/Viz/DrilldownIntro . The Splunk Dashboard Examples App (https://splunkbase.splunk.com/app/1603) is also useful for understanding how to do this. Ciao. Giuseppe
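In Simple XML, a minimal drilldown on a table or chart can look like this (the target dashboard name and the token are hypothetical):

<drilldown>
  <link target="_blank">/app/search/target_dashboard?form.selected_host=$click.value$</link>
</drilldown>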
Hi @HarishSamudrala, in addition to @richgalloway's hint, remember that under interesting fields you only see the fields present in at least 20% of the events; these fields probably fall below that percentage. If you instead run a search on one of these fields (e.g. field1=*), it is present in 100% of the resulting events. If you open the "All fields" panel, you can see the fields present (by default) in more than 1% of the events, and you can also use a filter to list all fields without any threshold. Ciao. Giuseppe
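Another way to list every extracted field with its coverage, regardless of the sidebar thresholds (index and sourcetype are placeholders), is fieldsummary:

index=your_index sourcetype=your_sourcetype
| fieldsummary
| table field count distinct_count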
Hi @LearningGuy, as I said, using 1000 instead of 1 you get a string. I don't know why a different number gives you a different type. You could eventually try:

| makeresults
| eval num = 1
| eval var_type = typeof(num)
| eval num2 = tostring(num, "commas") . " "
| eval var_type2 = typeof(num2)

Ciao. Giuseppe
Hi @slearntrain, you have to use stats instead of table:

index="xyz" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd-35tr-6s9i4ewlp6h3"
| rex field=_raw "\"APPID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO (?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})"
``` convert the textual timestamp to epoch seconds so the difference is a number ```
| eval infotime_epoch = strptime(infotime, "%Y-%m-%d %H:%M:%S,%3N")
| stats latest(eval(if(steptype="endNBflow", infotime_epoch, null()))) AS endNBflow latest(eval(if(steptype="end payload", infotime_epoch, null()))) AS endPayload BY appid flowname
| eval diff = endPayload - endNBflow

Ciao. Giuseppe
We have a use case where we need to calculate the time difference between the maximum infotime (steptype="endNBflow") and the infotime where steptype is "end payload". This particular message has 16 events comprising the request and response flows. The request flow ends with "end Payload" and the response flow ends with steptype "end NB Flow". I have the below query:

index="xyz" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd-35tr-6s9i4ewlp6h3"
| rex field=_raw "\"APPID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO (?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})"
| sort infotime
| table appid, flowname, steptype, infotime

How can I retrieve the value I am looking for? Any guidance here would be much appreciated.
Thanks, I tried both eval and fieldformat, but I'm still not getting the % appended.