All Topics


Hello, I am trying to automate an MSI install of Universal Forwarder 9.0.4, but I cannot get it to run with INSTALLDIR set. If I remove INSTALLDIR, the MSI runs with no issues and installs to the default location. Every time I add INSTALLDIR, Windows throws up the generic msiexec usage dialog. Per the documentation this should work, unless I am missing something? I am running this as an admin. https://docs.splunk.com/Documentation/Forwarder/9.0.4/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller

msiexec.exe /i "splunkforwarder-9.0.4-de405f4a7979-x64-release.msi" INSTALLDIR="e:\Program Files\SplunkUniversalForwarder" AGREETOLICENSE="Yes" WINEVENTLOG_APP_ENABLE=1 WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 WINEVENTLOG_FWD_ENABLE=1 WINEVENTLOG_SET_ENABLE=1 PERFMON=cpu,memory,network,diskspace SPLUNKUSERNAME=splunk SPLUNKPASSWORD="secret" /quiet
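The generic msiexec usage dialog usually means msiexec could not parse its arguments, and a common cause is the launching shell mangling the embedded quotes around a path that contains spaces. A minimal sketch of a workaround, assuming the command is launched from PowerShell (whose quoting rules differ from cmd.exe); the --% stop-parsing token passes the rest of the line to msiexec verbatim:

msiexec.exe --% /i "splunkforwarder-9.0.4-de405f4a7979-x64-release.msi" INSTALLDIR="e:\Program Files\SplunkUniversalForwarder" AGREETOLICENSE=Yes /quiet

From cmd.exe the original quoting should be accepted as-is, so it is worth confirming which shell the automation actually uses, and whether it is elevated.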
Hi Team, two of my indexers got disconnected from the search head. When I tried to re-add them as search peers, it asked for a remote username and password, which I don't have. How can we fix this so the two indexers connect again?
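The remote credentials requested here are an admin account on each indexer itself, not on the search head; if they are unknown, an admin password can be reset on the indexer (for example by seeding a new one with user-seed.conf and a restart). Once you have them, a sketch of re-adding a peer from the search head CLI, with hostname and passwords as placeholders:

splunk add search-server https://indexer01.example.com:8089 -auth admin:<sh_password> -remoteUsername admin -remotePassword <indexer_password>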
We have installed the Forescout Add-on and App for eyeExtend. The Forescout eyeExtend license is also installed everywhere on the Forescout appliances. We created a HEC token, entered it into Forescout, and configured the data we need from Forescout in Splunk. Forescout can ping the indexer, but we get a 500 error when we try to test. Both products are on-prem (Splunk Enterprise and the appliances), mostly in the same data center, so the failure doesn't make sense given that it can ping the server. Also, no firewall sits between the two, since it is the same data center. Thoughts? We have yet to get a response from Forescout support.
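Ping only proves ICMP reachability, not that the HEC endpoint on port 8088 is healthy. A quick way to isolate Splunk from Forescout is to exercise the token directly with curl from another host in the same data center (hostname and token are placeholders):

curl -k https://indexer.example.com:8088/services/collector/health
curl -k https://indexer.example.com:8088/services/collector/event -H "Authorization: Splunk <hec-token>" -d '{"event": "hec smoke test"}'

If these also fail, the 500 is on the Splunk side (token disabled, target index missing, or SSL settings); splunkd.log entries from the HttpInputDataHandler component usually name the cause.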
Hi Team, we have configured the DEV/TEST license on a standalone Splunk on-prem server. It worked fine that day, but after onboarding logs to the server we hit a problem: clicking Settings > Licensing returns a 500 Internal Server Error, so we cannot see the licensing page at all. After onboarding logs, the Monitoring Console also still shows 0% license consumption out of 50 GB. We checked splunkd.log and found errors. [screenshots attached in the original post: the internal server error, the splunkd.log error, and web.conf]
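Since the screenshots are not reproduced here, a hedged starting point for finding the underlying error on the server itself is to search the internal index for license-related failures; component names vary by version, so this pattern is deliberately loose:

index=_internal sourcetype=splunkd log_level=ERROR (component=LicenseMgr OR component=LMStackMgr OR Licen*)

Whatever this surfaces is usually more specific than the 500 page, and it distinguishes a licensing problem from a splunkweb/web.conf problem.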
I have sourcetype=apple and sourcetype=orange. They are both network-related sourcetypes. Is there an automated way of finding redundancies between two (or more) sourcetypes? For instance, apple has "sip" and orange has "sourceip". I want to automate the discovery of the redundant fields. While I don't know how to do this, I had considered flipping the values and fields, such that a result might look like:

value        field
12.23.34.45  apple-sip, orange-sourceip

However, I am open to anything that accomplishes the goal of auto-discovering redundant fields across multiple sourcetypes. I thought foreach * might possibly do the trick as well.
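A sketch along exactly those lines, assuming both sourcetypes live in one index (index=net is a placeholder): tag every field value with its sourcetype and field name, then group by value and keep values seen under more than one field name.

index=net sourcetype IN (apple, orange)
| foreach * [ eval tagged=mvappend(tagged, sourcetype . "-" . "<<FIELD>>" . "=" . '<<FIELD>>') ]
| mvexpand tagged
| rex field=tagged "^(?<src_field>[^=]+)=(?<value>.*)$"
| stats values(src_field) as field dc(src_field) as field_count by value
| where field_count > 1

This is noisy on low-cardinality fields (ports, booleans, status codes), so filtering to values that look like IP addresses, or requiring a minimum value length, will sharpen the output considerably.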
I need some help building a dashboard that will display the DAT Set Version of all Linux machines on the network. Any feedback is appreciated.
Hi Team, I am currently using the query below:

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"
| rex "\[(?<thread>Thread[^\]]+)\]"
| transaction thread startswith=" Started ASSOCIATION process for" endswith="Successfully completed ASSOCIATION process"
| timechart avg(duration) as duration span=1d
| eval duration=floor(duration/60)
| sort _time

I am able to see the last 7 days of data individually. I want one panel where I can check the average across the last 7 days, i.e. a single average over the whole 7-day window. Can someone guide me?
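A minimal sketch: drop the timechart bucketing and aggregate over the whole time range, so a single value comes back for the panel (same source and transaction logic as the original query):

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"
| rex "\[(?<thread>Thread[^\]]+)\]"
| transaction thread startswith=" Started ASSOCIATION process for" endswith="Successfully completed ASSOCIATION process"
| stats avg(duration) as avg_duration
| eval avg_duration=floor(avg_duration/60)

With the time picker set to the last 7 days, this returns one row: the average transaction duration in minutes across all 7 days, suitable for a single-value panel alongside the daily chart.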
I have created a Linux uptime/reboot alert: if a server gets rebooted, the alert triggers when uptime <= 600 seconds. Now I have one server where the uptime value is 0 seconds. It triggered an alert, but the server has been up for 90 days according to the Splunk logs. The data comes directly from the server; there is no custom script in line.
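A zero almost always means a missing or unparsed value rather than a real reboot. A hedged hardening of the alert search, assuming a numeric uptime field reported per host: take only each host's latest reading and exclude non-positive values.

... | stats latest(uptime) as uptime by host
| where uptime > 0 AND uptime <= 600

It is also worth inspecting the raw event from that one server: uptime=0 often indicates the collection input returned an empty string that was coerced to zero.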
Hi All, I am trying to pass a token link to another dashboard panel. My requirement is that when I pass a Windows server token it must display Windows metrics, and vice versa. The two OS SPL queries are different, and at any one time the panel displays metrics from one host only. Can anyone tell me how to achieve this in one panel?

Windows host SPL:
| mstats min("Processor.%_Idle_Time") as val WHERE (`itsi_entity_type_windows_metrics_indexes`) AND host=$host$ span=1m BY "instance"
| eval instance="CPU: ".instance
| eval val=100-val
| xyseries _time instance val

Unix host SPL:
| mstats max(ps_metric.pctCPU) as val WHERE (`itsi_entity_type_ta_nix_metrics_indexes`) AND host=$host$ span=1m BY USER
| eval instance="User: ".USER
| xyseries _time instance val
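One sketch, assuming an input that selects the OS (a radio here, but a drilldown <set> works the same way): a change handler stores the entire query in a token, and the panel searches on that token, so a single panel swaps between the two mstats searches. The token and input names are placeholders.

<input type="radio" token="os" searchWhenChanged="true">
  <label>OS</label>
  <choice value="windows">Windows</choice>
  <choice value="unix">Unix</choice>
  <change>
    <condition value="windows">
      <set token="os_query">| mstats min("Processor.%_Idle_Time") as val WHERE (`itsi_entity_type_windows_metrics_indexes`) AND host=$host$ span=1m BY "instance" | eval instance="CPU: ".instance | eval val=100-val | xyseries _time instance val</set>
    </condition>
    <condition value="unix">
      <set token="os_query">| mstats max(ps_metric.pctCPU) as val WHERE (`itsi_entity_type_ta_nix_metrics_indexes`) AND host=$host$ span=1m BY USER | eval instance="User: ".USER | xyseries _time instance val</set>
    </condition>
  </change>
</input>

The panel then uses <query>$os_query$</query>. Note that $host$ is expanded at the moment the token is set, so the OS input should be re-selected after the host token changes.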
Hi, I have the following case: an operation has multiple events, and every event of an operation is related by the field PushId. Below are the events of one operation (necessary fields were underlined in the original):

13:14:03,838;04-08-2023 13:14:03.838;;SMS;33786543;iOS;001452c7-9f80-4215-87b5-a20b00e3c4fd;;;0;OK
13:09:31,150;04-08-2023 13:09:31.133;;SEND_PUSH_APNS;33786543;ios;001452c7-9f80-4215-87b5-a20b00e3c4fd;This is a Title;This is the Body.;17;OK;null
13:09:31,131;04-08-2023 13:09:31.102;;NON_SILENT_PUSH;33786543;;001452c7-9f80-4215-87b5-a20b00e3c4fd;This is a Title;This is the Body..;29;OK;null
01:23:52,652;04-08-2023 01:23:52.519;10.129.150.86;SEND_PUSH_REQUEST;33786543;ios;001452c7-9f80-4215-87b5-a20b00e3c4fd;This is a Title;This is the Body.;133;OK;29a6c9e8-d731-47b4-81b9-6748596c4138

I want to count, for every PushId (001452c7-9f80-4215-87b5-a20b00e3c4fd in this case), the number of requests equal to SEND_PUSH_REQUEST, ACK, and SMS, and with those counts build a table including Title and Body. For this particular case it should look like:

Title | Body | Requests (SEND_PUSH_REQUEST) | ACK | SMS
This is a Title | This is the Body | 1 | 0 | 1

For the counts I'm OK with the queries below:

index=vfpt_idx_gssm_mbe PushId=001452c7-9f80-4215-87b5-a20b00e3c4fd
| stats count(eval(Request="SEND_PUSH_REQUEST")) as request, dc(eval(Request="ACK")) as ack, count(eval(Request="SMS")) as sms by PushId, Title, Body
| table Title Body request ack sms

The problem is that this doesn't show the correct count for SMS, because Title and Body are not on all events.

index=vfpt_idx_gssm_mbe PushId=001452c7-9f80-4215-87b5-a20b00e3c4fd
| stats count(eval(Request="SEND_PUSH_REQUEST")) as request, dc(eval(Request="ACK")) as ack, count(eval(Request="SMS")) as sms by PushId
| table Title Body request ack sms

With this query the counts are OK, but the table does not show Title and Body.

I'm stuck here and I don't know how to relate the counts to the Title and Body fields in one table. Thank you.
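A sketch that keeps the second query's grouping (by PushId only, so the SMS events still count) and pulls Title and Body into the result as aggregates rather than group-by keys, assuming each PushId carries a single Title/Body:

index=vfpt_idx_gssm_mbe PushId=*
| stats count(eval(Request="SEND_PUSH_REQUEST")) as request, count(eval(Request="ACK")) as ack, count(eval(Request="SMS")) as sms, values(Title) as Title, values(Body) as Body by PushId
| table Title Body request ack sms

values() ignores the events where Title/Body are empty, which is exactly why grouping by those fields split the SMS counts off. (count is used for ACK here instead of dc, since dc over an eval caps the result at 1.)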
Hi, I am looking to create an if statement: if a value contains part of another value, change it to another value; for example, if it contains x it's true, if not it's false.

| eval error=if(in(status, "error", "failure", "severe"), "true", "false")

I also want it to work for many values.
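in() tests for exact equality, so it never matches when the field merely contains the word. A sketch using match(), which does a regex (substring) match and handles many alternatives at once:

| eval error=if(match(status, "error|failure|severe"), "true", "false")

like(status, "%error%") is an alternative for a single substring, but match() scales better as the list of values grows.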
Hello everyone. After reading the post linked below, I tried to use the same approach for sourcetypes from Windows hosts.

https://community.splunk.com/t5/Splunk-Search/Applying-Field-Extractions-to-Several-Sourcetypes/m-p/118083

To do it, I used this regex as a sourcetype stanza: (?::){0}*WinEventLog

But it didn't work, and I think I misunderstood something. Can anyone explain what was wrong?
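A guess at the problem, hedged because props.conf stanza matching is only loosely documented: the (?::){0} prefix makes Splunk treat the stanza name as a pattern, but the remainder still has to match the entire sourcetype name. Windows event log sourcetypes are usually named like WinEventLog:Security, i.e. WinEventLog is at the start, not the end, so a pattern ending in WinEventLog cannot match them. Something along these lines may work instead (the REPORT name is a placeholder):

[(?::){0}WinEventLog...]
REPORT-shared_extractions = my_shared_extractions

The ... wildcard matches any remaining characters, including the colon, which is safer here than a plain * wildcard.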
I am ingesting advanced hunting logs, and I have a main dashboard where I present the number of events per event category as single values. I want to be able to track changes in the number of events. For instance, if Monday has 1,000,000 events but Tuesday has 2,000,000 events, then the number of events has increased by 1,000,000. How can I work out the difference and display it on the main dashboard? Any help is greatly appreciated.
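A minimal sketch using delta, which subtracts each row's count from the previous row's (the index name is a placeholder):

index=advanced_hunting earliest=-2d@d latest=@d
| timechart span=1d count
| delta count as change

For a single-value visualization there is also a built-in route: feed the panel a timechart spanning two periods and the single-value viz shows the latest number with a trend delta against the prior period. To show only the computed difference, append | tail 1 | fields change.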
Hello, we have an application whose logs look like the following; below are some sample cases for a single connection. Some characteristics of the logs:

* the same ID (conn) is used for all events of the same connection.
* the search is to check login results (BIND and RESULT pairs).
* the same connection can have more than one login operation, at the same time or at different times.
* events of different connections interleave.
* the only assumption is that the RESULT event comes after the BIND event of the same login.

We use transaction for this, but we want to see whether a more efficient approach is possible, e.g. streamstats/eventstats (still studying), since the log volume is large. Would anyone please shed some light? Thanks a lot. Best regards, /stwong

====== basic case
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - BIND dn="uid=123456,dc=mydomain,dc=hk" method=128 version=3
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=123456,dc=mydomain,dc=hk"
====== two logins on the same connection, one after the other
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - BIND dn="uid=123456,dc=mydomain,dc=hk" method=128 version=3
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - RESULT err=49 tag=97 nentries=0 etime=0
[04/Aug/2023:15:26:22 +0800] conn=3497880 op=0 msgId=1 - BIND dn="uid=123456,dc=mydomain,dc=hk" method=128 version=3
[04/Aug/2023:15:26:22 +0800] conn=3497880 op=0 msgId=1 - RESULT err=49 tag=97 nentries=0 etime=0
====== interleaved BINDs: can only assume the first RESULT is for the first BIND operation
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - BIND dn="uid=123456,dc=mydomain,dc=hk" method=128 version=3
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - BIND dn="uid=123457,dc=mydomain,dc=hk" method=128 version=3
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - RESULT err=49 tag=97 nentries=0 etime=0
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - RESULT err=48 tag=97 nentries=0 etime=0
====== events of a different connection interleaved
[04/Aug/2023:15:26:21 +0800] conn=3497880 op=0 msgId=1 - BIND dn="uid=123456,dc=mydomain,dc=hk" method=128 version=3
[04/Aug/2023:15:26:21 +0800] conn=3498439 op=1 msgId=2 - SRCH base="dc=mydomain,dc=hk" scope=2 filter="(myId=a12345)" attrs="uidNumber"
[04/Aug/2023:15:26:23 +0800] conn=3498439 op=0 msgId=1 - RESULT err=0 tag=97 nentries=0 etime=0
[04/Aug/2023:15:26:22 +0800] conn=3497880 op=0 msgId=1 - BIND dn="uid=123457,dc=mydomain,dc=hk" method=128 version=3
[04/Aug/2023:15:26:22 +0800] conn=3497880 op=0 msgId=1 - RESULT err=49 tag=97 nentries=0 etime=0
[04/Aug/2023:15:26:23 +0800] conn=3497880 op=0 msgId=1 - RESULT err=49 tag=97 nentries=0 etime=0
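A hedged streamstats sketch of the FIFO-pairing idea (the index name and the rex extractions for conn, optype, dn, and err are assumptions): number the BINDs and the RESULTs independently within each connection, then group on that sequence number so BIND n joins RESULT n.

index=ldap ("- BIND " OR "- RESULT ")
| rex "conn=(?<conn>\d+) op=\d+ msgId=\d+ - (?<optype>BIND|RESULT)"
| rex "dn=\"(?<dn>[^\"]+)\""
| rex "err=(?<err>\d+)"
| sort 0 _time
| streamstats count(eval(optype="BIND")) as bind_seq count(eval(optype="RESULT")) as result_seq by conn
| eval seq=if(optype="BIND", bind_seq, result_seq)
| stats values(dn) as dn values(err) as err range(_time) as duration by conn seq

Unlike transaction, this streams through the data in one pass without buffering open transactions in memory, which usually matters at large volume; the FIFO assumption is the same one the post already makes for interleaved BINDs.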
I have the following:

<form theme="dark">
  <label>gongya-UWIT-NAT-DashBoard</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="var_OrigIP" searchWhenChanged="true">
      <label>Original IP</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="var_dstIP" searchWhenChanged="true">
      <label>Destination IP</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="radio" token="SPRT">
      <label>$var_OrigIP$</label>
      <choice value="index=net_uwit_nat srcip=$var_OrigIP$ | table srcip, transip, dstip, dstport | stats count(dstip) as Count by srcip, transip, dstip, dstport">Yes</choice>
      <choice value="index=net_uwit_nat srcip=172.21.244.25">No</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>$SPRT$</query>
          <earliest>-240m@m</earliest>
          <latest>now</latest>
        </search>
        <option name="count">50</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

$var_OrigIP$ does not pick up the value. Any ideas? Thanks!
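A likely cause, hedged because Simple XML substitutes tokens only one level deep: the panel replaces $SPRT$ with the chosen string, but the $var_OrigIP$ embedded inside that string is not expanded again, so the search receives the literal text $var_OrigIP$. One workaround is a change handler that resolves the value at selection time (the token names here are assumptions):

<input type="radio" token="SPRT_mode">
  <label>Filter on Original IP?</label>
  <choice value="yes">Yes</choice>
  <choice value="no">No</choice>
  <change>
    <condition value="yes">
      <set token="src_filter">$var_OrigIP$</set>
    </condition>
    <condition value="no">
      <set token="src_filter">172.21.244.25</set>
    </condition>
  </change>
</input>

with the panel query referencing the plain-value token directly:

<query>index=net_uwit_nat srcip=$src_filter$ | stats count(dstip) as Count by srcip, transip, dstip, dstport</query>

Since <set> expands $var_OrigIP$ at the moment the radio changes, re-select the radio after editing the Original IP field so the stored value stays current.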
I have some questions regarding data trim
I want to rename a row value depending on the data (it is a line chart). The line chart row name changes based on the token $value$:

if value is "iron" -> the row must be renamed "metal" -> and the graph line becomes black
if value is "steak" -> the row must be renamed "food" -> and the graph line becomes red

So I wrote the code like this, but it doesn't work at all:

<search>
  <query>
    ... | eval dt = case("$value$" == "iron", "metal", 1=1, "food") | rename "row 1" as dt ...
  </query>
</search>
<option name="charting.fieldColors">{"metal": 0xffffff, "food": 0xFF0000}</option>

How can I solve this problem?
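The likely bug: rename "row 1" as dt renames the column to the literal name dt, not to the value of the dt field. A hedged sketch of one way around this is to resolve the label in the XML instead of in SPL, so rename receives a plain string (the input and token names are assumptions):

<input type="dropdown" token="value">
  ...
  <change>
    <condition value="iron"><set token="series_label">metal</set></condition>
    <condition value="steak"><set token="series_label">food</set></condition>
  </change>
</input>

and in the query:

... | rename "row 1" as "$series_label$"

The existing charting.fieldColors option then works unchanged, since the series really is named metal or food.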
I have a set of data that I upload into Splunk every morning as a .csv file, because the tool doesn't feed this particular data automatically. It is a list of agents installed on assets. I use a saved search to query the data, because I latest() and stats every field to make sure it is the latest record in the database; it's a pretty big query. I want to force the data to look like it was just ingested every time it is queried (i.e. whenever the saved search is executed). I tried adding a field to the saved search called _time (also tried timestamp) and setting it to now(); that worked for the display, but my records still go stale, so my assumption is that Splunk is using the original timestamp for these records (and I am assuming I cannot change that). The query works for the 24-hour timeframe, or if I set the timeframe to the current day, until the day passes. But if I shorten the timeframe to last 15 minutes, last 4 hours, or last 60 minutes, I get nothing from the query, which makes sense because the data was timestamped outside the range. But I need the timestamp to look like right now. The data is timestamped when I upload it, using the current date/time. I want the last upload to ALWAYS be the current set of records (including the time), working with any timeframe. In a perfect world the timestamp would be the moment the query is executed, so it is always now(), making the data look fresh. Is there a way to do this?
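The diagnosis is right: the time-range picker filters on the indexed _time before the search pipeline runs, so an eval of _time inside the saved search cannot rescue events that fall outside the window. A hedged alternative that sidesteps time filtering entirely is to load the daily CSV as a lookup file instead of indexing it, since inputlookup ignores the time range (the lookup name is a placeholder):

| inputlookup agent_inventory.csv
| eval _time=now()

If the data must stay indexed, the saved search can pin its own window independently of the dashboard picker (e.g. earliest=-30d in the base search) and then keep only the newest upload with a filter such as | where _time >= relative_time(now(), "-1d@d").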
Hello, I am developing a new app for Splunk (Enterprise and Cloud Platform). I would like to understand how I can publish this app as "Splunk supported". Any input is highly appreciated (karma given). Thanks, Mohsin
Hello All, I would like some suggestions. I am trying to search the Cisco ASA sourcetype in Splunk for the users currently logged in to an ASA, using "last 24 hours" as the time range. I am trying to count login messages (113004) and compare them with logout messages (722023); there are other message IDs for logout: 716002, 113019. It seems that I need the "latest" login. A user can log in and log out, and I can get login/logout pairs by user; if there is such a pair, I can assume the user is not logged in. The challenge is that a user can have logged in and out multiple times. My theory is that if a user has more logins than logouts (with a login later than the last logout), that user should be logged on. The trick seems to be finding the latest login per user and counting values in the message_id field. Any suggestions here? I can check users on the ASA using the "show vpn-sessiondb summary" command, but that number almost never matches my searches. I can use any suggestion. Thanks, eholz1
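A sketch of that exact theory, assuming the user and message_id fields are extracted by the ASA add-on: compute each user's login/logout counts and last timestamps, then keep users whose most recent event is a login.

sourcetype=cisco:asa message_id IN (113004, 722023, 716002, 113019)
| eval login_time=if(message_id="113004", _time, null()), logout_time=if(message_id!="113004", _time, null())
| stats count(login_time) as logins, count(logout_time) as logouts, max(login_time) as last_login, max(logout_time) as last_logout by user
| where last_login > coalesce(last_logout, 0)

A mismatch with show vpn-sessiondb summary is still likely at the edges: sessions opened before the 24-hour window have no login event in range, and idle or admin-initiated teardowns may emit message IDs not in this list.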