All Posts


@Priya70 Could you please clarify what you are looking for?
@AANAND Could you please clarify what you mean by "export real event"? 
Hello @kiran_panchavat, thanks for explaining this in such detail, and thanks for your time. Really appreciated.
@rahusri2  Install the forwarder credentials on individual forwarders in *nix:
1. From your Splunk Cloud Platform instance, go to Apps > Universal Forwarder.
2. Click Download Universal Forwarder Credentials and note the location where the credentials package splunkclouduf.spl has been downloaded.
3. Copy the file to a temporary directory, usually your "/tmp" folder.
4. Install the splunkclouduf.spl app by entering the following on the command line: $SPLUNK_HOME/bin/splunk install app /tmp/splunkclouduf.spl. When you are prompted for a user name and password, enter the user name and password for the universal forwarder. The following message displays if the installation is successful: App '/tmp/splunkclouduf.spl' installed.
5. Restart the forwarder to enable the changes by entering the following command: ./splunk restart.
I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply, thanks.
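Putting those steps together, a rough shell sketch of the whole sequence on one forwarder (the download location ~/splunkclouduf.spl is only an assumption here; adjust it and $SPLUNK_HOME to your environment):

# Copy the downloaded credentials package to a temporary directory (assumed download path)
cp ~/splunkclouduf.spl /tmp/
# Install the credentials app; you are prompted for the forwarder's admin user name and password
$SPLUNK_HOME/bin/splunk install app /tmp/splunkclouduf.spl
# Restart the forwarder so the new output settings take effect
$SPLUNK_HOME/bin/splunk restart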
@rahusri2 Please check this documentation: https://docs.splunk.com/Documentation/Forwarder/9.4.0/Forwarder/ConfigSCUFCredentials
I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply, thanks.
@rahusri2
1. Configure the `inputs.conf` file on your forwarders to monitor the `/var/log` directory and create an index on the indexers.
2. Download the `outputs.conf` file (Splunk Cloud Platform universal forwarder credentials package) from Splunk Cloud.
- If there is no intermediate forwarder, you can directly apply the file to your universal forwarders.
- If you are using an intermediate forwarder, download the file from Splunk Cloud and apply it to the heavy forwarder or intermediate forwarder.
3. If you have a deployment server, retrieve the `outputs.conf` (Splunk Cloud Platform universal forwarder credentials package) file from Splunk Cloud and push it to the forwarders using the deployment server. If you do not have a deployment server and prefer to implement the configuration directly, you can apply it manually to the forwarders.
4. Restart the Splunk instance to apply the changes.
**Note:**
1. Ensure that the firewall rules between your on-premises environment and Splunk Cloud are properly configured.
2. A Splunk Cloud Platform receiving port is configured and enabled by default.
I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply, thanks.
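For step 1, a minimal `inputs.conf` sketch on the universal forwarder could look like the following; the index name my_linux_logs is only a placeholder and must match an index that actually exists in your Splunk Cloud Platform instance:

# Example inputs.conf, e.g. in $SPLUNK_HOME/etc/system/local/ or in a deployment app
[monitor:///var/log]
# placeholder index name - replace with the index created on the indexers
index = my_linux_logs
disabled = false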
@rahusri2  When you work with forwarders to send data to Splunk Cloud Platform, you must download an app that has the credentials specific to your Splunk Cloud Platform instance. You install the forwarder credentials app on your universal forwarder, heavy forwarder, or deployment server, and it lets you connect to Splunk Cloud Platform. I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply, thanks.
Hi @jpillai ,
two main things:
4/8 CPUs are very few for indexers, which should have at least 12 CPUs each (if you don't have ES or ITSI). You should analyze your requirements, with special attention to expected ingest growth and the number of scheduled searches and concurrent users: the usual rule of thumb is one indexer for every 200 GB/day of indexed data (less if you have ES or ITSI), so you have too many IDXs. In addition, you should analyze the performance of your disks (storage and system disks) to find the correct number of IDXs, because you need at least 800 IOPS, better if more!
About configurations, SHs usually require more CPUs than IDXs, so I'd use (if you don't have ES or ITSI): SH and IDX: 24/48 CPUs, 64 GB RAM; HF, CM, SHC-D, MC and DS: 12/24 CPUs, 64 GB RAM.
About the secondary site, as @dural_yyz also said, in normal activity it is mainly used for data replication, but you should also analyze the worst case, so I'd use the same configuration as the main site.
Then, the Cluster Manager doesn't need to be as performant, and there must be only one in the cluster. In other words, you can have just one CM, because the cluster continues to run even if the CM is down; if needed, keep a silent (cold standby) copy to turn on if the primary site outage lasts longer than predicted.
Lastly, I don't see in your infrastructure the SHC Deployer, Monitoring Console, and Deployment Server, for which you can apply the same considerations as for the Cluster Manager.
Ciao.
Giuseppe
Yeah, budget is a concern. Given that the secondary site will only be used during a site1 failure, most of the hardware will just be sitting there without much activity, except maybe the indexers doing some replication. So I am trying to see how we can minimize hardware at site2. We will probably be using site2 for indexing and searching for maybe a few hours over a period of months, when site1 is down or under maintenance.
A few questions.
1. You are not using the input token form.Tail in your post-processing search. Is it used in the base search?
2. What are you clicking to expect something to occur? The <set> statement you have WILL set the input token in the display to the clicked Tail value if you click the LEGEND of the column chart.
3. It looks like you are using base searches incorrectly. Base searches are NOT intended to hold RAW data; they are designed to hold aggregated data that has been transformed in some way - see this article
https://docs.splunk.com/Documentation/Splunk/9.4.0/Viz/Savedsearches#Post-process_searches_2
You are likely to make performance worse by not using a transforming search in a base search, and in any case, base searches have result limits.
I am guessing your dashboard is something like this (note: please post using the code sample option in the menu <> when posting code).

<form version="1.1" theme="light">
  <label>Tail</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="Tail" searchWhenChanged="true">
      <label>Tail</label>
      <choice value="*">All</choice>
      <choice value="1">1</choice>
      <choice value="2">2</choice>
      <choice value="3">3</choice>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>| makeresults count=60 | eval Tail=random() % 3 | streamstats c | eval r=random() % 100 | eval Tail=if(r&lt;30,"*",Tail) | eval source=random() % 10 | search Tail=$Tail$ | chart count over source by Tail</query>
          <earliest>-30m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="form.Tail">$click.name2$</set>
        </drilldown>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <html>$form.Tail$</html>
    </panel>
  </row>
</form>

You can see that when you click the legend this will change the input to what you have clicked.
Can you clarify exactly what is NOT working and what you are actually doing that does NOT work?
Thanks - that works now.
Hello,
I have a requirement to collect and monitor logs from several machines running in a private network. These machines are generating logs that need to be sent to Splunk Cloud for monitoring. Here's what I've done so far:
1. Installed Universal Forwarder: I have installed the Splunk Universal Forwarder on each machine that generates logs.
2. Configured Forwarding: I used the command ./splunk add forward-server prd-xxx.splunkcloud.com:9997 to set the server address for forwarding logs to Splunk Cloud.
3. Set Up Monitoring: I added the directory to be monitored with the command ./splunk add monitor /var/log.
However, I'm unable to see any logs on the Splunk Cloud dashboard at "prd-xxx.splunkcloud.com:9997". I have a question regarding port 9997; it seems that this port should be open on Splunk Cloud, but I don't see an option to configure this in Splunk Cloud, as there is no "Settings > Forwarding and Receiving > Receive data" section available. How can I resolve this issue and ensure that logs are properly sent to and visible on Splunk Cloud?
Thanks.
If you want to end up with several multivalue fields that are correlated with each other, you can't use stats values(), as the output from a values() aggregation is always in sorted order. There are a number of options:
1. Use stats list(), which will record the item for EVERY event but the order is preserved; of course, if you have duplicates for the same user on the same _time, you will have multiple entries. Note that list() has a maximum list length of 100 items. (See the sketch after this post.)
2. Make a combination field of the items you want to end up with, use stats values(new_field), and then split them out again, e.g. like this
... | eval _tmp=SourceUser."###".Login_Time
| stats values(_tmp) as _tmp count by _time host
| rex field=_tmp max_match=0 "(?<User>.*)###(?<VPN_Login_Time>.*)"
| fields - _tmp
3. Do this to handle the potential duplicate logins on the same _time for the same user
... | stats values(Login_Time) as VPN_Login_Time count by _time host SourceUser
| stats list(*) as * sum(count) as count by _time host
so include the SourceUser initially, then use stats list finally.
Hope this helps
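For option 1, a minimal sketch using the index, sourcetype, and field names from the related question in this thread (borrowed for illustration, not tested against real data) would be:

index="network" sourcetype=vpn
| eval Login_Time = strftime(_time, "%m/%d/%Y %I:%M:%S %p")
| stats list(SourceUser) as User list(Login_Time) as VPN_Login_Time count by _time host

Because list() preserves event order, the User and VPN_Login_Time values stay lined up with each other within each row.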
@zksvc which version of Splunk are you using?
Hi @avikc100 ,
as an alternative to the solution from @ITWhisperer, you could use the rex command:
index="webmethods_prd" host="USPGH-WMA2AISP*" source="/apps/WebMethods/IntegrationServer/instances/default/logs/SmartIST.log"
| rex field=SmartISTINTERFACE "^(?<SmartISTINTERFACE>[^ ]+)"
| stats count by SmartISTINTERFACE
Ciao.
Giuseppe
Dear All,
Kindly suggest how to sort data in the stats command output by event time.
Example:
Requirement: VPN login details per source user in the last one hour.
SPL Query:
index="network" sourcetype=vpn
| eval "Login_Time" = strftime(_time, "%m/%d/%Y %I:%M:%S %p")
| stats values(SourceUser) as User values(Login_Time) as VPN_Login_Time count by _time host
Example output row:
Date: 1/7/2025 0:00
Host: 10.10.8.45
User: Amar, Rajesh, Zainab
VPN Login Time: 01/07/2025 06:01:25 AM, 01/7/2025 06:30:21 AM, 01/7/2025 06:50:49 AM
Count: 3
The challenge in the above example output: Amar logged in at 01/7/2025 06:30:21 AM and Zainab logged in at 01/07/2025 06:01:25 AM, but in the output the users are sorted in alphabetical order and the login times are sorted in their own order, so the two fields no longer correspond to each other. And the User field cannot be added as a third field in the by expression.
I'm trying to set and unset a filter (manage a token) based on click.name2 - if form.Tail=* I want to set it to click.name2, and if it's not * I want to set it to * (thereby unsetting the filter).
I've tried 3 approaches and all had problems. This is the chart config:
<chart>
  <title>amount of warning per file per Tail</title>
  <search base="basesearch">
    <query>|search | search "WARNNING: " | rex field=_raw "WARNNING: (?&lt;warnning&gt;(?s:.*?))(?=\n\d{5}|$)" | search warnning IN $warning_type$ | search $project$ | search $platform$ | chart count over source by Tail</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.legend.labelStyle.overflowMode">ellipsisEnd</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <set token="form.Tail">$click.name2$</set>
  </drilldown>
</chart>
When I try something like this:
<drilldown>
  <eval token="form.Tail">if("$form.Tail$"==* OR "$form.Tail$"=="ALL", "$click.name2$",*)</eval>
</drilldown>
nothing happens (I also tried something simpler like <eval token="form.Tail">if($form.Tail$ != $click.name2$, 111,222)</eval> but still nothing happens).
When I try this:
<eval token="form.Tail">if("$form.Tail$" == "$click.name2$", "*", "$click.name2$")</eval>
it just puts $click.name2$ in the token.
When I try:
<drilldown>
  <condition match="form.Tail==*">
    <set token="form.Tail">$click.name2$</set>
  </condition>
  <condition match="form.Tail!=*">
    <set token="form.Tail">*</set>
  </condition>
</drilldown>
instead of managing the token in the dashboard, it opens a search in the current tab, thereby exiting the dashboard.
What am I doing wrong here? Thanks in advance to all helpers.
facing same issue
The problem arises at least in part from missing header in event data. If the illustrated raw event is a complete event in _raw, this is what you can do to add that header. No need for rex.

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| head 1
| eval _raw = "Severity,Hostname,CertIssuer,FilePath,Status,ExpiryDate
" . replace(_raw, "\|", ",")
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

Here is a complete emulation:

| makeresults
| eval _raw="ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_hcm.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_hcm.jks|Expiring Soon|2025-06-14
INFO|appu2.de.com|rootca13|/applications/hs_cert/cert/live/h_core.jks|Valid|2026-10-18
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_core.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_core.jks|Expiring Soon|2025-03-22
ALERT|appu2.de.com|key|/applications/hs_cert/cert/live/h_mq.p12|Expired|2025-01-03"
``` the above emulates index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log ```
| head 1
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|\s]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

Output is
Severity Hostname CertIssuer FilePath Status ExpiryDate
ALERT appu2.de.com rootca12 /applications/hs_cert/cert/live/h_hcm.jks Expired 2020-10-18
WARNING appu2.de.com key /applications/hs_cert/cert/live/h_hcm.jks Expiring Soon 2025-06-14
INFO appu2.de.com rootca13 /applications/hs_cert/cert/live/h_core.jks Valid 2026-10-18
ALERT appu2.de.com rootca12 /applications/hs_cert/cert/live/h_core.jks Expired 2020-10-18
WARNING appu2.de.com key /applications/hs_cert/cert/live/h_core.jks Expiring Soon 2025-03-22
ALERT appu2.de.com key /applications/hs_cert/cert/live/h_mq.p12 Expired 2025-01-03
The core requirement for using map is as follows. Given a start date T0, an end date TD, a cycle of N days, and a trigger threshold of M days, the system should calculate whether each user has accessed the same sensitive account more than M times continuously within T0 to T0+N days, and then calculate the number of visits for T0+1 to T0+1+N days, T0+2 to T0+2+N days, ... T0+D to T0+D+N days (a user who accesses the same sensitive account multiple times in one day is counted as 1 visit, and counts are not accumulated across different users). During this period, detailed information should be displayed every time there are more than M consecutive visits, and clicking on the specific number of visits should show the detailed information for each visit record.
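As a very rough SPL sketch of the sliding daily-window count (everything here is an assumption for illustration: the index name access_audit, the field names user and sensitive_account, and the values N=7 and M=3), one approach could be:

index=access_audit
``` one row per user, sensitive account and calendar day, so repeated accesses in one day count once ```
| bin _time span=1d
| stats count by _time user sensitive_account
| eval day_with_access=1
``` sliding N-day window per user/account pair (N=7 here); sort so streamstats sees time order ```
| sort 0 _time
| streamstats time_window=7d sum(day_with_access) as days_in_window by user sensitive_account
``` trigger when the window contains more than M=3 days with access ```
| where days_in_window > 3

This only reproduces the windowed count; the click-through to the detailed visit records would still need to be built separately, for example as a dashboard drilldown.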