All Topics



Hello, need some help here. The goal is to pass one IP_Address found in the inner search to the outer search. The IP is correctly extracted, but I'm getting the following error from the "where" command and am clueless at this point.

Here's the error:

Error in 'where' command: The operator at '10.132.195.72' is invalid.

And here's the search:

index=ipam sourcetype=data earliest=-48h latest=now() | where cidrmatch(name, IP_Address) [ search index=networksessions sourcetype=microsoft:dhcp (Description=Renew OR Description=Assign OR Description=Conflict) earliest=-15min latest=now() | head 1 | return ($IP_Address) ]
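As a sanity check on the matching logic itself (separate from the SPL syntax problem), cidrmatch takes a CIDR block and an IP; the same membership test can be sketched with Python's standard ipaddress module (the subnet below is a made-up example, not from the post):

```python
import ipaddress

def cidr_match(cidr: str, ip: str) -> bool:
    # True when ip falls inside the CIDR block -- what SPL's cidrmatch tests
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)

print(cidr_match("10.132.195.0/24", "10.132.195.72"))  # True (made-up subnet)
print(cidr_match("10.132.0.0/16", "10.133.1.1"))       # False
```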
Hello, I am new to Splunk and am having an issue with the following command:

SendersMNO="*" NOT ("VZ", "0", "Undefined") | where SenderType="Standard" | stats count as Complaints by SendersAddress | sort 10 -Complaints | table SendersAddress, SendersMNO, Complaints

The command works; however, the result column for SendersMNO is not producing any results. Any reason why? All help is appreciated.
Add-on: https://splunkbase.splunk.com/app/3662/
Known affected: 4.8.1

Symptoms: You begin to predominantly see hexadecimal events in your Cisco FireSIGHT index/sourcetype instead of real data, and you see large gaps between events (usually ~10 minutes, the time it takes to roll over a file). The 'Source' also ends with '.log.swp' instead of '.log'.

Cause: $SPLUNK_HOME/etc/apps/TA-eStreamer/default/inputs.conf

[monitor://$SPLUNK_HOME/etc/apps/TA-eStreamer/bin/encore/data]
disabled = 0
source = encore
sourcetype = cisco:estreamer:data
crcSalt = <SOURCE>

I believe the issue is with the line 'source = encore', because 'crcSalt = <SOURCE>' is also specified. Since all files have the same source, all files have the same crcSalt, which is why the actual '.log' is not collected. The '.swp' manages to get collected when Splunk checks the '.log', and since a swap file is very short-lived, Splunk accidentally collects a lot of garbage unrelated to the actual file contents (sorry Linux admins for butchering the technical detail).

Solution: Edit $SPLUNK_HOME/etc/apps/TA-eStreamer/default/inputs.conf, comment out the source line, then restart Splunk services.

If someone knows of a way to override the source (via a local inputs.conf) back to the filename (which changes frequently), so that editing the default inputs.conf is not necessary, please comment below. Those with the Cisco license allowing TAC support on this add-on may want to raise this issue with them so they can fix it for new downloads and future versions -- I lack that particular license. Hope this helps someone (I searched for encore and hex and didn't see any prior conversation on the topic).
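To illustrate the failure mode described above: Splunk fingerprints a monitored file by checksumming its first bytes together with the crcSalt, so a constant salt makes files with identical beginnings indistinguishable. A conceptual Python sketch (this is not Splunk's actual algorithm, and the header bytes and paths are made up):

```python
import zlib

def fingerprint(first_bytes: bytes, crc_salt: str) -> int:
    # Conceptual stand-in for Splunk's file fingerprint: a CRC over the
    # first chunk of the file combined with the crcSalt value
    return zlib.crc32(first_bytes + crc_salt.encode())

# Two distinct data files whose opening bytes happen to match (made-up content)
file_a_head = b"eStreamer record header v1\n"
file_b_head = b"eStreamer record header v1\n"

# With 'source = encore' and crcSalt = <SOURCE>, every file gets the same salt:
same = fingerprint(file_a_head, "encore") == fingerprint(file_b_head, "encore")
print(same)  # True -- the second file looks like one Splunk has already seen

# With the real, distinct file paths as the salt, the fingerprints differ:
diff = fingerprint(file_a_head, "/data/a.log") != fingerprint(file_b_head, "/data/b.log")
print(diff)  # True
```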
I have set up a single-node test instance of Splunk to try to ingest Zscaler LSS (not NSS) logs via a TCP input. However, it is not ingesting any data, despite my being able to see traffic via tcpdump on that port.

I have installed the latest Zscaler Splunk App (v2.0.7) and the Zscaler Technical Add-on (v3.1.2):

[root@ip-10-127-0-113 apps]# ls | grep scaler
TA-Zscaler_CIM
zscalersplunkapp

Via the Web UI, I have set up a TCP input on port 10000 and set the sourcetype, app, and index options. I have checked to make sure that Splunk is listening on TCP/10000 and can see that it is:

[root@ip-10-127-0-113 apps]# netstat -antp | grep 10000
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 7992/splunkd
tcp 0 0 10.127.0.113:10000 x.x.x.x:38392 SYN_RECV -
tcp 0 0 10.127.0.113:10000 x.x.x.x:51586 SYN_RECV -
tcp 0 0 10.127.0.113:10000 x.x.x.x:53844 SYN_RECV -

I can't see any errors in the _internal index (although I could be searching wrong). I'm using the search below:

index=_internal "err*"

The only errors I can see relate to the 'summarize' command. Any pointers would be really appreciated. Many thanks,
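One way to narrow this down is to hand-deliver a single event to the TCP input and see whether it is indexed; if a manually sent event arrives but the Zscaler feed does not, the input itself is fine. A minimal sketch (the host and port come from the post; the message text is arbitrary):

```python
import socket

def send_test_event(host: str, port: int, message: str) -> None:
    # Open a plain TCP connection and write one newline-terminated event,
    # roughly what an LSS feed delivers to a Splunk TCP input
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(message.encode() + b"\n")

# Example (host/port from the post):
# send_test_event("10.127.0.113", 10000, "LSS test event from manual client")
```

Separately, the SYN_RECV lines in the netstat output mean the TCP three-way handshake is not completing, which usually points at a firewall, security-group, or routing issue rather than Splunk itself.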
Hi Community, how can I make a saved search report open in statistics mode and allow downloading of a .csv file? Currently I have to open the report in Search to get the option to download the .csv. I added the "display.statistics.show = 1" option to the saved search. Thank you
Hi, I am using Splunk Cloud and I need to disable some indexes temporarily. I am using the AWS add-on app to ship AWS ALB logs from an S3 bucket. My daily ingestion is going beyond the license, and I would like to disable these indexes temporarily.

I can see there is an option to disable an input in the inputs section, but the same option is not available for an index, although the index listing page shows it as enabled in the last column. I would appreciate it if someone has a solution for the problem mentioned above. Thanks.

Muzeeb
Hello! My objective is to read the values of a Splunk table visualization from a dashboard into a JavaScript object for further processing. I'm not sure what object yet, but my main issue lies with iterating through the table and extracting the cell values. Can anybody provide some sample JS code for identifying the table object and iterating through its values? Thanks! Andrew
Hi, I would like to know which users are logging in, and from which region/IP.
Hi all, I'm trying to find which programs from a given list haven't raised an event in the event log in the last time period, to create an alert based on it. For an individual alert I have

index=eventlogs SourceName="my program" | stats count as COUNT_HEARTBEAT | where COUNT_HEARTBEAT=0

which works. How can I supply a list of programs and list which of them have a COUNT_HEARTBEAT of 0, so that I can make a generic alert?

Thanks, kind regards, Ian
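Underneath, what's being asked for is a set difference between the expected program list and the programs actually seen in the window; in SPL this is typically done by appending the full list (e.g. from a lookup) and filling missing counts with zero, but the core logic is just (watch list and events invented for illustration):

```python
def silent_programs(expected: list, events: list) -> list:
    # Programs in the watch list that produced no events in the window
    seen = {e["SourceName"] for e in events}
    return [p for p in expected if p not in seen]

# Invented watch list and events
watchlist = ["my program", "backup agent", "sync service"]
events = [{"SourceName": "my program"}, {"SourceName": "sync service"}]
print(silent_programs(watchlist, events))  # ['backup agent']
```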
Hi guys, I am new to Splunk. I need to run a query to extract the system name values, which appear twice in the same log event. The logs in one event are:

user: user1 system: system1 user:user2 system: system2

The output should look like below:

output1 output2
system1 system2

Cheers.
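For what it's worth, this is a multi-value capture; SPL's rex supports that with max_match=0, and the pattern can be checked first in Python (the event text is taken from the sample above):

```python
import re

# Event text from the sample above
event = "user: user1 system: system1 user:user2 system: system2"

# Capture every value that follows 'system:' (whitespace after the colon optional)
systems = re.findall(r"system:\s*(\S+)", event)
print(systems)  # ['system1', 'system2']
```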
Hello everyone, I am working on a dashboard with 2 event panels, and I would like to use the outcome of panel 1 as an input to panel 2. Can you please advise the optimal way to take a specific field's output and use it as an input in the next panel? I tried a base search, but it did not provide the result I expected.

Panel 1:

<query>index=xyz sourcetype=vpn *session* | fields session, connection_name, DNS, ip_subnet, Location, user | stats values(connection_name) as connection, values(Dns) as DNS by session | join type=inner session [ search index=abc sourcetype=vpn *Dynamic* | fields assigned_ip, session | stats values(assigned_ip) as IP by session ] | table User, session, connection_name, ip_subnet, IP, DNS, Location | where user="$field1$" OR connection_name="$field2$" OR session="$field3$"</query>

Once the output is generated for the above query, I would like to take the value displayed for Ip_subnet and use it as input for panel 2.

Panel 2:

<query>| inputlookup letest.csv | rename "IP address details" as IP | xyseries Ip_subnet, Location, IP | where Ip_subnet="$Ip_subnet$"</query>

In panel 2, $Ip_subnet$ is the input that would be taken from the value of Ip_subnet in panel 1.
Hello Splunkys, I face some challenges right now. We run a Splunk installation with about 50 active users and 10 different roles. Now we need to allow them to send themselves alert messages via email.

First problem: according to the docs, it's not possible to send an email if you're not an admin and the SMTP server needs authentication.

Second problem: you cannot set up per-role or per-user sender info, only system-wide via the GUI.

I found out that you can supply username= and password= parameters via an SPL search, but this does not apply to alerts, and the credentials then show up in plaintext in the logs. I found that you can supply credentials via an alert_actions.conf file per app, but then the credentials would show up in the Git repo where we version our apps.

Some .conf files honor environment variables, but I did not find whether alert_actions.conf does; and even then, the credentials would still be accessible via the CLI.

Can it be so hard for Splunk to implement something as basic as per-user email sending? Has somebody achieved something similar?
PLEASE HELP! This has been driving me mad for days! Every time an event is added, Splunk re-reads the text file from the start and re-indexes the events. I am getting hundreds of duplicate events and have tried a variety of combos in inputs.conf, but still can't solve it!

I am monitoring a series of text files. Each day a new .txt file is created and events are written into it continuously throughout the day, until the beginning of the next day, when again a new file is created. The files are named as follows:

Statistics_20211104_034330_840.txt

The contents of the file are as follows:

QPS statistics:
SW-Version:3.64 [UTC+00:00]
time,id,valid,invalid,mode,......[ETC ETC ETC]
2021-11-04T03:43:19+00:00,248559,1,0,A,....[ETC ETC ETC]
2021-11-04T03:43:19+00:00,248560,1,0,A,....[ETC ETC ETC]

This is what I currently have in inputs.conf:

[monitor://\\Lgwnasapp002\bsr$\]
disabled = false
index = idx_security_scanner
sourcetype = QPSdata
whitelist = .+Statistics_\d{8}_\d{6}_\d{1,5}\.txt
crcSalt = <SOURCE>

Any ideas?
Hi all, I am looking to extract data from an index search for the below requirement: I need the timestamp of the first event of each day, for the last 30 days, in a particular index and sourcetype. Can someone help with the query to get the desired output?
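The underlying computation is a group-by on calendar day keeping the minimum timestamp per group; a small Python sketch of that logic (the timestamps are invented):

```python
from datetime import datetime

# Invented event timestamps standing in for _time
times = [
    "2021-11-04T03:43:19", "2021-11-04T09:10:00",
    "2021-11-05T00:02:11", "2021-11-05T23:59:59",
]

# Keep the earliest timestamp seen for each calendar day
first_event = {}
for t in times:
    dt = datetime.fromisoformat(t)
    day = dt.date().isoformat()
    if day not in first_event or dt < first_event[day]:
        first_event[day] = dt

for day in sorted(first_event):
    print(day, first_event[day].time())
```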
Hi - from one button, I am looking to launch a URL (working) and reset the token "comment_token" to * with JavaScript (not working). Any help would be wonderful, please.

<form theme="dark" script="someJsCode.js">
<input type="text" token="comment_token" searchWhenChanged="true">
<label>Comment</label>
<default>*</default>
<initialValue>*</initialValue>
</input>
<html>
<style>.btn-primary { margin: 5px 10px 5px 0; }</style>
<a href="http://$mte_machine$:4444/executeScript/envMonitoring@@qcst_processingScriptsChecks.sh/-updateComment/$runid_token$/$script_token$/$npid_token$/%22$comment_token$%22" id="buttonId" target="_blank" class="btn btn-primary" style="height:25px;width:250px;">Submit</a>
</html>

JavaScript:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    var tokens = mvc.Components.get("default");
    $('#buttonId').on("click", function (e) {
        tokens.set("form.comment_token", "*");
    });
});

I think it's this line - tokens.set("form.comment_token", "*"); - but I can't be sure.
Can you please help with how to construct stats metrics for the below Docker logs?

ThreadID=124;ThreadIDHex=0000007c;ThreadName=[XNIO-2 task-32];Node=XXXXXX;TransID=;ConsumerSenderID=NA;URI=/getBaselinedcategorylist;ServiceName=findXXXX;TranasactionStartTime=;TransactionEndTime=2021-11-05 05:34:34.366;TotalResponseTime=;TransactionStatus=SUCCESS;Method=GET;StatusCode=200;ErrorMsg=;CaptureLocation=MicroserviceResponse;

ThreadID=124;ThreadIDHex=0000007c;ThreadName=[XNIO-2 task-32];Node=XXXXXX;TransID=;ConsumerSenderID=NA;URI=/getBaselinedcategorylist;ServiceName=findXXXX;TranasactionStartTime=2021-11-05 05:34:34.264;TransactionEndTime=;TotalResponseTime=;TransactionStatus=;Method=GET;StatusCode=;ErrorMsg=;CaptureLocation=MicroserviceRequest;

The stats should give transaction count, transaction status, average, and 90th percentile, by URI and Method.
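The aggregation asked for boils down to parsing the semicolon-delimited key=value pairs, then computing count, average, and 90th percentile of the response time. A sketch with simplified, invented log lines (the response-time values are made up; the real events split request and response across two lines, which would first need to be joined on a transaction ID):

```python
from statistics import mean

# Simplified, invented log lines in the same key=value;... layout
logs = [
    "URI=/getBaselinedcategorylist;Method=GET;TotalResponseTime=120;TransactionStatus=SUCCESS;",
    "URI=/getBaselinedcategorylist;Method=GET;TotalResponseTime=80;TransactionStatus=SUCCESS;",
    "URI=/getBaselinedcategorylist;Method=GET;TotalResponseTime=200;TransactionStatus=FAILURE;",
]

def parse(line: str) -> dict:
    # Split the semicolon-delimited key=value pairs into a dict
    return dict(kv.split("=", 1) for kv in line.strip(";").split(";") if "=" in kv)

def p90(values: list) -> float:
    # Nearest-rank 90th percentile: the value at position ceil(0.9 * n)
    s = sorted(values)
    k = -(-len(s) * 90 // 100) - 1
    return s[k]

rows = [parse(l) for l in logs]
times = [int(r["TotalResponseTime"]) for r in rows]
print("count:", len(rows))
print("avg:", round(mean(times), 2))
print("p90:", p90(times))
```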
Hi folks, I have the below code, but for some reason my CSS is not rendering. What am I missing?

<dashboard>
<label>Processing_Step_Clone_2</label>
<row>
<panel id="PStitle">
<title>Processing Steps for Source$form.Source$ - $form.earliest_date$ - $form.time$</title>
<html>
<style>
.dashboard-row #PStitle .dashboard-panel panel-title {
    font-size: 40px !important;
    color: #7FFF00;
}
</style>
</html>
Hi all, maybe a dumb question: do I need to set up a Universal Forwarder on the Splunk server to monitor and index data (so it's like the server is forwarding data to itself)? I tested setting up an app in etc/apps/ with the below config, but it doesn't work.

inputs.conf:

[batch:///opt/splunk/temp/test_forward/*]
move_policy = sinkhole
disabled = 0
index = test
sourcetype = test
crcSalt = test
_TCP_ROUTING = test

outputs.conf:

[indexAndForward]
index = false

[tcpout]
indexAndForward = false
maxQueueSize = 200MB

[tcpout:test]
server = <server IP>:9997

Thanks
Reviewing some docs on using Splunk Cloud (trial version) with a Java app and log4j2: I need to configure an HTTP Event Collector to get a token (I did this part). But in the log4j2.xml file I need to set the token and the URL. Where or how can I get the URL? Thanks
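For what it's worth, HEC requests authenticate with an 'Authorization: Splunk <token>' header against the /services/collector/event endpoint; the exact host for Splunk Cloud depends on your stack type (the docs describe patterns like http-inputs-<host>.splunkcloud.com), so the URL and token below are placeholders, not real values. A standard-library sketch of building such a request:

```python
import json
import urllib.request

# Placeholder values -- substitute your own stack host and HEC token
HEC_URL = "https://http-inputs-example.splunkcloud.com/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(url: str, token: str, event: dict) -> urllib.request.Request:
    # HEC expects an Authorization header of the form 'Splunk <token>'
    body = json.dumps({"event": event}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
    )

req = build_hec_request(HEC_URL, HEC_TOKEN, {"message": "hello from log4j2 host"})
print(req.full_url)
print(req.get_header("Authorization"))
```

Sending it would then be a matter of urllib.request.urlopen(req); check "Send data to HTTP Event Collector" in the Splunk Cloud docs for the exact URI of your stack.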
Hi team, I have events in Splunk that log the employee number in each online meeting. I want to find and stats the employee-number distribution and percentage. I have the below query, where the bin span is a constant 100:

<baseQuery>
| bin empNumber span=100
| stats count by empNumber
| eventstats sum(count) as total
| eval ratio%=round(count/total*100,2)
| fields - total, empNumber
| sort - ratio%

But now the stats requirement has changed. Because 90% of online meetings have an employee number less than 100, I want non-continuous bins in one query:
1) for online meetings where the employee number is less than 100, I want to set the bin width to 10;
2) for online meetings where the employee number is greater than 100, I want to set the bin width to 100.
And I don't want to query twice (stats with bin width 100 first, then with bin width 10 again); I want it to happen in one query. Question: how do I change my existing query to meet this requirement?
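The bucketing rule itself (width 10 below 100, width 100 at or above) is straightforward to express; here is a Python sketch of the intended distribution, with made-up meeting sizes:

```python
from collections import Counter

def bin_label(emp_count: int) -> str:
    # Width-10 bins below 100 attendees, width-100 bins from 100 up
    width = 10 if emp_count < 100 else 100
    low = (emp_count // width) * width
    return f"{low}-{low + width - 1}"

meetings = [5, 12, 18, 47, 99, 100, 150, 250, 999]  # invented meeting sizes
dist = Counter(bin_label(m) for m in meetings)
total = sum(dist.values())
for label, count in sorted(dist.items(), key=lambda kv: int(kv[0].split("-")[0])):
    print(label, count, f"{round(count / total * 100, 2)}%")
```

One way to express the same rule in SPL is a conditional eval in place of the fixed bin, e.g. eval bucket=if(empNumber<100, floor(empNumber/10)*10, floor(empNumber/100)*100), then stats count by bucket.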