All Posts


Wait a second. You're looking for events on the HF? It doesn't (at least shouldn't) work that way. A forwarder, as the name says, is a component that forwards data from inputs to outputs. If properly configured, an HF should not index events locally.
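If it helps, here is a minimal sketch of what a forwarding-only outputs.conf on the HF looks like (the group name and indexer addresses are made-up placeholders, not from your environment):

    [tcpout]
    defaultGroup = primary_indexers
    # false is the default; set explicitly to make sure nothing is kept locally
    indexAndForward = false

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997

With indexAndForward = false the HF only parses and forwards; the events should then be searchable on the indexers, not on the HF itself.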
Hi, here is the new MASA diagram, where you can look up where to put those and on which server: https://splunk-usergroups.slack.com/files/U0483CQG4/F06PKREDNLW/masa.pdf?origin_team=T047WPASC&origin_channel=Psearch r. Ismo
Hi, here are some old answers about this:
https://community.splunk.com/t5/Installation/Upgrading-and-migrating-to-a-new-host-how-to-migrate-large/m-p/601048#M11615
https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538069#M4823
https://community.splunk.com/t5/Installation/What-are-the-steps-for-Splunk-enterprise-migration-physical-to/m-p/648565/highlight/true
The last one covers what to do when something goes wrong. r. Ismo
Maybe you should check this?

[udp://<remote server>:<port>]
* Similar to the [tcp://] stanza, except that this stanza causes the Splunk instance to listen on a UDP port.
* Only one stanza per port number is currently supported.
* Configures the instance to listen on a specific port.
* If you specify <remote server>, the specified port only accepts data from that host.
* If <remote server> is empty - [udp://<port>] - the port accepts data sent from any host.
* The use of <remote server> is not recommended. Use the 'acceptFrom' setting, which supersedes this setting.
* Generates events with source set to udp:portnumber, for example: udp:514
* If you do not specify a sourcetype, generates events with sourcetype set to udp:portnumber.

Even though the example shows that the : is not mandatory when you have only a port definition, I would like to test it as [udp://:1514] to ensure that this is not the issue.
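In other words, a minimal inputs.conf sketch of that test (the port and sourcetype below are just placeholders for illustration):

    [udp://:1514]
    sourcetype = syslog
    # resolve the sending host by IP rather than DNS
    connection_host = ip

If data arrives with this stanza but not with your original one, the stanza header was the problem.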
I see that the index on the Heavy Forwarder is empty
And at the same time it converts that field (the result of case) into a multivalue field which contains both of those values. As @yuanliu said, you must provide sample data which produces that "error" if you want us to be able to help you.
Hi Team, Thanks for being there! I hope you are all doing great! I was working on a requirement to install and monitor Kubernetes using AppDynamics. I have gone through the video from Cisco U: https://www.youtube.com/watch?v=RTzMJxzSa9I But I have a question: do we not need a cluster agent? I don't seem to have used or even mentioned a cluster agent in the process. Could you help me with this?
Please share the searches which are failing
Here are the settings for props.conf:

SHOULD_LINEMERGE = false        #Should always be false
LINE_BREAKER = ([\r\n]+)IP      #Adds IP to the line breaking (if all lines start with IP)
NO_BINARY_CHECK = true
TIME_FORMAT = %e-%m-%y %T       #Sets the time format
TIME_PREFIX = At:               #Use the time found after At:
MAX_TIMESTAMP_LOOKAHEAD = 20    #Do not search more than needed for the time
@gcusello As previously stated, I implemented the setting SHOULD_LINEMERGE = false on the Splunk Cloud SH, which successfully resolved the issue. However, the logs contain HTML events, which are now being treated as individual events, resulting in difficulties extracting the desired fields. Could you please advise on how we can address this?
Hi, I have a requirement to upgrade RHEL from version 7.9 to 8.X, and our infrastructure team is currently in the process of building a new set of servers running on RHEL 8.X. Consequently, I will need to migrate Splunk from the existing RHEL 7.9 OS to 8.X. Our Splunk architecture is on-premise and includes multiple Search Heads (SHs) in a cluster, Indexers in a cluster, and various other components. Has anyone here performed a migration from one version of an OS to another before? Could I please get some guidelines on how to perform this, especially concerning clustered components?

I have checked the below steps:
1. Stop Splunk Enterprise services.
2. Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host.
3. Install Splunk Enterprise on the new host.
4. Start Splunk Enterprise on the new instance.

I am specifically looking for any additional steps that need to be performed, particularly for clustered components. Thank you. Kiran
You need to clarify the problem in the search result as well as explain/illustrate your raw data.  "Can't populate result" can have a million different meanings.  Do you mean to say that you get a completely blank table, i.e., no results at all?  If this is the case, you probably do not have a field named correlationId in your raw data. Or do you mean that values(content.File.fprocess_message) as ProcessMsg gives all null output? You cannot expect volunteers to read your mind.  Explain in unambiguous terms. You speak about ProcessMsg, but it is not obvious whether a field named "ProcessMsg" exists in the raw data, despite a suggestion of that in the coalesce function.  Again, you cannot just ask volunteers to speculate from your code (aka mind-reading) what the raw data look like. Importantly, as @ITWhisperer questioned, why go through all the trouble of coalescing if you are going to discard it, then use the field name ProcessMsg to store the output of a stats function, as indicated in values(content.File.fprocess_message) as ProcessMsg?  Most importantly, what is content.File.fprocess_message? Do you have evidence that this field even has a value? Do you really mean

index="mulesoft" applicationName="ext" environment=DEV (*End of GL-import flow*) OR (message="GLImport Job Already Running, Please wait for the job to complete*") OR (message="process - No files found for import to ISG")
| rename content.File.fstatus as Status
| eval Status=case(like('Status', "%SUCCESS%"), "SUCCESS", like('Status', "%ERROR%"), "ERROR", like('message', "%process - No files found for import to ISG%"), "ERROR", like('message', "GLImport Job Already Running, Please wait for the job to complete"), "WARN")
| eval ProcessMsg=coalesce(ProcessMsg, message)
| stats values(content.File.fid) as "TransferBatch/OnDemand" values(content.File.fname) as "BatchName/FileName" values(ProcessMsg) as ProcessMsg values(Status) as Status values(content.File.isg_file_batch_id) as OracleBatchID values(content.File.total_rec_count) as "Total Record Count" by correlationId
| table Status Start_Time "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
Based on this search, I suspect that the raw software field is not JSON.  Regardless, @richgalloway's suggestion of mvexpand is sound.  But you must give examples of your software values; additionally, you probably omitted max_match=0 from your real rex command.  I say this because, by reverse engineering (something you should not force volunteers to do for you), I see two distinct possible formats that the software field can take to give the table you illustrated.  Both possible formats require max_match=0, but each requires a different approach to applying mvexpand.  Let me illustrate. (You should have illustrated your data in this manner.) If you do | table hostname software, you probably see the following in Splunk's statistics table (screenshot omitted). However, never use a screenshot to illustrate data. (A screenshot is only useful when illustrating visualization anomalies.) The same display could come from two fundamentally different values.

1. When software is a multivalue field with distinct values like "cpe:/a:vendor1:product1:version1", "cpe:/a:vendor2:product2:version2", and so on, but all in a single event.  If this is the case, all you need is to apply mvexpand to software.

``` use when 'software' is multivalue ```
| mvexpand software
| rex field=software max_match=0 "cpe:\/a:(?<Vendor>[^:]+):(?<Product>[^:]+):(?<Version>.*)"
| table hostname, Vendor, Product, Version
| dedup hostname, Vendor, Product, Version

2. When software is single-value but multiline, like

cpe:/a:vendor1:product1:version1
cpe:/a:vendor2:product2:version2
cpe:/a:vendor3:product3:version3
cpe:/a:vendor4:product4:version4

In this case, you need to first split software into a single-line multivalue field before mvexpand.  Like this:

``` use when 'software' is multiline ```
| eval software = split(software, "
")
| mvexpand software
| rex field=software max_match=0 "cpe:\/a:(?<Vendor>[^:]+):(?<Product>[^:]+):(?<Version>.*)"
| table hostname, Vendor, Product, Version
| dedup hostname, Vendor, Product, Version

In both cases, you can get the exact result you illustrated in the OP.  But you must know which data format you have.

Here are two data emulations for you to play with and compare with real data.  You can attach them to their corresponding mvexpand method to see how they turn into the desired tabulation.

1. multivalue 'software'

| makeresults format=csv data="hostname
hostname1"
| eval software = split("cpe:/a:vendor1:product1:version1
cpe:/a:vendor2:product2:version2
cpe:/a:vendor3:product3:version3
cpe:/a:vendor4:product4:version4", "
")
| append
    [makeresults format=csv data="hostname
hostname2"
    | eval software = split("cpe:/a:vendor1:product2:version2
cpe:/a:vendor2:product4:version1
cpe:/a:vendor3:product3:version5
cpe:/a:vendor4:product6:version3", "
")]
``` emulates multivalue 'software' ```

2. multiline 'software'

| makeresults format=csv data="hostname
hostname1"
| eval software = "cpe:/a:vendor1:product1:version1
cpe:/a:vendor2:product2:version2
cpe:/a:vendor3:product3:version3
cpe:/a:vendor4:product4:version4"
| append
    [makeresults format=csv data="hostname
hostname2"
    | eval software = "cpe:/a:vendor1:product2:version2
cpe:/a:vendor2:product4:version1
cpe:/a:vendor3:product3:version5
cpe:/a:vendor4:product6:version3"]
``` emulates multiline 'software' ```

Of course, reverse engineering (aka mind-reading), though laborious and generally loathed by volunteers, is often incorrect.
There could be some other data format that I haven't considered that will give you the undesired output after rex; it is even possible some format will give you that undesirable output without max_match=0.  If so, only you can give us the real data format (anonymize as needed) to help yourself.
As you have to be explicit with stanza names, you could put all these saved searches into a dedicated app, and then set the ttl in the [default] stanza for all searches in this app.
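For instance, a sketch of savedsearches.conf in that dedicated app (the 10-minute value is just an illustration, not a recommendation):

    [default]
    # applies to every saved search in this app unless overridden per stanza
    dispatch.ttl = 600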
Well, kind of. We use the following setup:
- A development instance with a home-grown splunkgit app which allows us to push and pull apps to a git repo
- A CI/CD pipeline which runs an app through AppInspect
- A cron job which puts an app with a new successful build from the production branch on our deployer, and thus deploys it to production

Our workflow is:
- Develop, either:
  - with an IDE on the git repo, and pull to the dev environment for testing, or
  - GUI-based development, and push the results back from the development instance to the git repo
- Merge to the production branch
- The pipeline builds it (AppInspect) and it gets deployed

You have to be careful with stuff in /local. We have a push-back cron job on our search heads which pushes the current apps back into another branch on the git repo, so we always have a valid backup and version history.
You need to show sample data for which the case function fails to produce the expected result, then the actual results.  The stats just makes troubleshooting more difficult.  But even if you want to include stats, you still need to show sample data.
OK, then you should check on SCP whether those events went to the wrong index or have a wrong timestamp. You should also look into the future, e.g. with latest set to now + 1 year or something.
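For example, a sketch of such a check (the index name is a placeholder; adjust the time window as needed):

index=your_index earliest=-24h latest=+1y
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S"), index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table event_time index_time host source sourcetype

Events whose event_time is far in the future, or far from index_time, point to a timestamp-parsing problem.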
This is a Splunk forum.  You need to describe in detail what your data source contains, and how an analyst will detect lateral movement without using Splunk, step by step.  Then, illustrate the desired output.
Hi @Millowster, you have to create a drilldown. Follow the documentation at https://docs.splunk.com/Documentation/Splunk/9.2.0/Viz/DrilldownIntro . The Splunk Dashboard Examples App (https://splunkbase.splunk.com/app/1603) is also useful for understanding how to do this. Ciao. Giuseppe
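For orientation, a minimal Simple XML sketch of a table drilldown that sets a token from the clicked value (the token name and the search are placeholders, not from your dashboard):

    <table>
      <search>
        <query>index=_internal | stats count by sourcetype</query>
      </search>
      <drilldown>
        <!-- capture the clicked cell value into a token -->
        <set token="selected_sourcetype">$click.value$</set>
      </drilldown>
    </table>

Another panel can then reference $selected_sourcetype$ in its own search to show the detail view.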