All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone, I am working on a dashboard with two event panels, and I would like to use the outcome of panel 1 as an input to panel 2. Can you please advise on the optimal way to take a specific field's output and use it as an input in the next panel? I tried a base search, but it did not give the expected result.

Panel 1:

<query>index=xyz sourcetype=vpn *session*
| fields session, connection_name, DNS, ip_subnet, Location, user
| stats values(connection_name) as connection, values(DNS) as DNS by session
| join type=inner session
    [ search index=abc sourcetype=vpn *Dynamic*
      | fields assigned_ip, session
      | stats values(assigned_ip) as IP by session ]
| table user, session, connection_name, ip_subnet, IP, DNS, Location
| where user="$field1$" OR connection_name="$field2$" OR session="$field3$"</query>

Once the output is generated for the above query, I would like to take the value displayed for ip_subnet and use it as input for panel 2.

Panel 2:

<query>| inputlookup letest.csv
| rename "IP address details" as IP
| xyseries Ip_subnet, Location, IP
| where Ip_subnet="$Ip_subnet$"</query>

In panel 2, $Ip_subnet$ is an input that would be taken from the value of ip_subnet in panel 1.
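A common way to wire this up (a minimal sketch, not the only approach) is a drilldown on panel 1 that sets a token from the clicked row, which panel 2 then consumes; the token and field names below come from the question.

<panel>
  <table>
    <search>
      <query><!-- panel 1 query from above --></query>
    </search>
    <drilldown>
      <!-- copy the clicked row's ip_subnet value into the token panel 2 reads -->
      <set token="Ip_subnet">$row.ip_subnet$</set>
    </drilldown>
  </table>
</panel>

With this in place, panel 2's where clause on $Ip_subnet$ fills in as soon as a row in panel 1 is clicked.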
Hello Splunkys, I face some challenges right now. We run a Splunk installation with about 50 active users and 10 different roles. We now need to allow them to send themselves alert messages via email.

First problem: according to the docs, it is not possible to send an email if you are not an admin and the SMTP server needs authentication.

Second problem: you cannot set up per-role or per-user sender info, only system-wide via the GUI.

I found out that you can supply username= and password= parameters via an SPL search, but this does not apply to alerts, and the credentials then show up in plaintext in the logs. I also found that you can supply credentials via an alert_actions.conf file per app, but then the credentials would show up in the Git repo where we version our apps.

Some .conf files honor environment variables, but I did not find out whether alert_actions.conf does, and the values would still be accessible via the CLI.

Can it be so hard for Splunk to implement something as basic as per-user email sending? Has somebody achieved something similar?
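For reference, the system-wide settings the post describes live in the [email] stanza of alert_actions.conf; a minimal sketch with placeholder values (the credential is stored per instance, not per user, which is the limitation being described):

[email]
mailserver = smtp.example.com:587
use_tls = 1
auth_username = smtp-user@example.com
# plaintext until the next restart, after which Splunk encrypts it in place
auth_password = changeme
from = splunk-alerts@example.com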
PLEASE HELP! This has been driving me mad for days! Every time an event is added, Splunk re-reads the text file from the start and re-indexes events. I am getting hundreds of duplicate events and have tried a variety of combos in inputs.conf, but still can't solve it!

I am monitoring a series of text files. Each day a new .txt file is created, and events are written into this file continuously throughout the day until the beginning of the next day, when a new file is created again. The files are named as follows:

Statistics_20211104_034330_840.txt

The contents of the file are as follows:

QPS statistics:
SW-Version:3.64 [UTC+00:00]
time,id,valid,invalid,mode,......[ETC ETC ETC]
2021-11-04T03:43:19+00:00,248559,1,0,A,....[ETC ETC ETC]
2021-11-04T03:43:19+00:00,248560,1,0,A,....[ETC ETC ETC]

This is what I currently have in inputs.conf:

[monitor://\\Lgwnasapp002\bsr$\]
disabled = false
index = idx_security_scanner
sourcetype = QPSdata
whitelist = .+Statistics_\d{8}_\d{6}_\d{1,5}\.txt
crcSalt = <SOURCE>

Any ideas?
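Two things worth ruling out, hedged since neither is visible from the config alone: if the writing application rewrites the whole file on each update rather than appending, the monitor's checkpoint no longer matches and Splunk re-reads from the start; and because every file begins with the same "QPS statistics:" header, the default 256-byte initial CRC can collide across daily files. A common alternative to salting is to widen the CRC window:

[monitor://\\Lgwnasapp002\bsr$\]
disabled = false
index = idx_security_scanner
sourcetype = QPSdata
whitelist = .+Statistics_\d{8}_\d{6}_\d{1,5}\.txt
# hash more of the file head instead of salting, so identical headers
# no longer make different files look like the same one
initCrcLength = 1024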
Hi all, I am looking to extract data from an index search for the below requirement: I need the timestamp of the first event of each day, for the last 30 days, in a particular index and sourcetype. Can someone help with the query to get the desired output?
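A minimal sketch (index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype earliest=-30d@d
| bin _time span=1d as day
| stats min(_time) as first_event by day
| eval day=strftime(day, "%Y-%m-%d"), first_event=strftime(first_event, "%Y-%m-%d %H:%M:%S")

min(_time) is the epoch of the earliest event in each day's bucket; the final eval just renders both fields readably.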
HI - from one button, I am looking to launch a URL (working) and reset a token "comment_token" to * with JavaScript (not working). Any help would be wonderful, please.

<form theme="dark" script="someJsCode.js">
  <input type="text" token="comment_token" searchWhenChanged="true">
    <label>Comment</label>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <html>
    <style>.btn-primary { margin: 5px 10px 5px 0; }</style>
    <a href="http://$mte_machine$:4444/executeScript/envMonitoring@@qcst_processingScriptsChecks.sh/-updateComment/$runid_token$/$script_token$/$npid_token$/%22$comment_token$%22" id="buttonId" target="_blank" class="btn btn-primary" style="height:25px;width:250px;">Submit</a>
  </html>

JavaScript:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    var tokens = mvc.Components.get("default");
    $('#buttonId').on("click", function (e) {
        tokens.set("form.comment_token", "*");
    });
});

I think it's this line - tokens.set("form.comment_token", "*"); - but I can't be sure.
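One likely cause, offered as a sketch rather than a diagnosis: HTML panels render asynchronously, so $('#buttonId') can run before the button exists, and setting only the "default" token model does not update an already-submitted form. A delegated handler that writes both models avoids both problems:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    var defaultTokens = mvc.Components.get("default");
    var submittedTokens = mvc.Components.get("submitted");
    // delegated binding still works if the <html> panel renders after this script
    $(document).on("click", "#buttonId", function () {
        // reset the input in both token models so the UI and searches agree
        defaultTokens.set("form.comment_token", "*");
        if (submittedTokens) {
            submittedTokens.set("form.comment_token", "*");
        }
    });
});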
Can you please help with how to construct stats metrics for the below Docker logs?

ThreadID=124;ThreadIDHex=0000007c;ThreadName=[XNIO-2 task-32];Node=XXXXXX;TransID=;ConsumerSenderID=NA;URI=/getBaselinedcategorylist;ServiceName=findXXXX;TranasactionStartTime=;TransactionEndTime=2021-11-05 05:34:34.366;TotalResponseTime=;TransactionStatus=SUCCESS;Method=GET;StatusCode=200;ErrorMsg=;CaptureLocation=MicroserviceResponse;

ThreadID=124;ThreadIDHex=0000007c;ThreadName=[XNIO-2 task-32];Node=XXXXXX;TransID=;ConsumerSenderID=NA;URI=/getBaselinedcategorylist;ServiceName=findXXXX;TranasactionStartTime=2021-11-05 05:34:34.264;TransactionEndTime=;TotalResponseTime=;TransactionStatus=;Method=GET;StatusCode=;ErrorMsg=;CaptureLocation=MicroserviceRequest;

The stats should give transaction count, average, and 90th percentile response time by URI, Method, and TransactionStatus.
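A minimal sketch, assuming the semicolon-delimited key=value pairs are pulled out with extract, and that TotalResponseTime is populated on response events (index and sourcetype are placeholders; since TotalResponseTime is empty in the sample, it may first have to be derived from the start/end timestamps across the request/response pair):

index=your_index sourcetype=docker CaptureLocation=MicroserviceResponse
| extract pairdelim=";" kvdelim="="
| stats count as transactionCount
        avg(TotalResponseTime) as avgResponseTime
        perc90(TotalResponseTime) as p90ResponseTime
        by URI Method TransactionStatus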
Hi folks, I have the below code, but for some reason my CSS is not rendering. What am I missing?

<dashboard>
  <label>Processing_Step_Clone_2</label>
  <row>
    <panel id="PStitle">
      <title>Processing Steps for Source $form.Source$ - $form.earliest_date$ - $form.time$</title>
      <html>
        <style>
          .dashboard-row #PStitle .dashboard-panel panel-title {
            font-size: 40px !important;
            color: #7FFF00;
          }
        </style>
      </html>
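A likely culprit, going only by the selector: panel-title is a class, so it needs a leading dot. A sketch of the corrected style block:

<html>
  <style>
    #PStitle .dashboard-panel .panel-title {
      font-size: 40px !important;
      color: #7FFF00;
    }
  </style>
</html>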
Hi all, maybe a dummy question: do I need to set up a Universal Forwarder on the Splunk server to monitor and index data? (So it's like the server forwarding data to itself.) I tested setting up an app in etc/apps/ with the below config, but it doesn't work.

inputs.conf:

[batch:///opt/splunk/temp/test_forward/*]
move_policy = sinkhole
disabled = 0
index = test
sourcetype = test
crcSalt = test
_TCP_ROUTING = test

outputs.conf:

[indexAndForward]
index = false

[tcpout]
indexAndForward = false
maxQueueSize = 200MB

[tcpout:test]
server = <server IP>:9997

Thanks
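For this case no forwarder should be needed: a full Splunk Enterprise instance indexes its own inputs directly. A sketch of the same input with the forwarding removed (outputs.conf and _TCP_ROUTING only matter when sending to another instance, and the index named here must already exist):

# inputs.conf - local batch input, indexed by this instance
[batch:///opt/splunk/temp/test_forward/*]
move_policy = sinkhole
disabled = 0
index = test
sourcetype = test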
Reviewing some docs on using Splunk Cloud (trial version) with a Java app and log4j2: I need to configure an HTTP Event Collector to get a token (I did this part). But in the log4j2.xml file I need to set the token and the URL. Where or how can I get the URL? Thanks
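The HEC URL is derived from the Splunk Cloud hostname: standard stacks generally use https://http-inputs-YOURSTACK.splunkcloud.com:443 and free trial stacks https://inputs.YOURSTACK.splunkcloud.com:8088 (check the HEC docs for your stack type). A sketch of the log4j2.xml wiring, assuming the SplunkHttp appender from the splunk-library-javalogging package; the host and token values are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="com.splunk.logging">
  <Appenders>
    <SplunkHttp name="splunk"
                url="https://http-inputs-YOURSTACK.splunkcloud.com:443"
                token="YOUR-HEC-TOKEN"
                index="main"
                sourcetype="log4j2">
      <PatternLayout pattern="%m"/>
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="splunk"/>
    </Root>
  </Loggers>
</Configuration>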
Hi team, I have events in Splunk that log the employee number of each online meeting. I want to find and stats the employee-number distribution and percentage. I have the below query, where the bin span is a constant 100:

<baseQuery>
| bin empNumber span=100
| stats count by empNumber
| eventstats sum(count) as total
| eval ratio=round(count/total*100,2)
| fields - total
| sort - ratio

But now the stats requirement has changed. Because 90% of online meetings have an employee number less than 100, I want to set discontinuous bins in one query:

1) for online meetings with an employee number less than 100, I want a bin size of 10
2) for online meetings with an employee number greater than 100, I want a bin size of 100

And I don't want to query twice, stats by bin size 100 first and then bin size 10 again; I want to make it happen in one query.

Question: how do I change my existing query to meet this requirement?
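One way to get uneven bins in a single pass (a sketch; field names as in the question) is to compute the bucket with eval instead of bin, then stats over the label:

<baseQuery>
| eval bucket=if(empNumber<100, floor(empNumber/10)*10, floor(empNumber/100)*100)
| eval bucketLabel=if(empNumber<100, bucket."-".(bucket+9), bucket."-".(bucket+99))
| stats count by bucketLabel
| eventstats sum(count) as total
| eval ratio=round(count/total*100,2)
| fields - total
| sort - ratio

Events below 100 land in width-10 buckets ("0-9", "10-19", ...) and the rest in width-100 buckets ("100-199", ...), all in one query.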
Howdy all, I am looking for some assistance with a SEDCMD. I am trying to clean up some XmlWinEventLog:Security events, particularly the 4688 event, where we are capturing the command line of running processes. We are finding that this is causing us some ingestion woes at the moment, with some _raw event sizes being over 5 KB each. So we are trying to clean up some of the normal noise in the <data Name='CommandLine'> ...... </data> element for certain processes.

These are currently being collected by the Windows TA on Universal Forwarders on each desktop, so this will be added to local/props.conf.

The first one I have looked at is something our Citrix VPN client does, which is spawn some PowerShell. The event is rather large, so I am looking to strip out the content and replace it with some meaningful text. I have confirmed the regex works; I just want advice on the actual SEDCMD. Does this appear correct?

SEDCMD-cleanxmlcitrixcommandprocess = s/("powershell\.exe" "-Command\s{6})((\$version='1\.0\.0\.0'\s\$application='CitrixVPN')[\S\s\r\n]+("))/gm /Citrix Command process/

And in props.conf, should this go under [source::WinEventLog:Security] or [source::XmlWinEventLog:Security]? The latter is the current sourcetype of the event when searching on the indexer.

Any assistance would be greatly appreciated.
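On the shape of the expression, offered as a sketch (keep your validated regex; the points are that the replacement text sits between the second and third delimiters, the flag comes last, and a backreference keeps the prefix group):

[XmlWinEventLog:Security]
SEDCMD-cleanxmlcitrixcommandprocess = s/("powershell\.exe" "-Command\s{6})\$version='1\.0\.0\.0'\s\$application='CitrixVPN'[\S\s]+?"/\1Citrix Command process"/g

On placement: since XmlWinEventLog:Security is what you see in the sourcetype field, a plain [XmlWinEventLog:Security] stanza matches it; [source::...] stanzas match the source field instead. Also note that SEDCMD runs at parse time, so with Universal Forwarders it needs to live on the indexers (or an intermediate heavy forwarder), not in the UF's local/props.conf.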
I need general direction to upgrade from 7.x to 8.2.3 (latest). I have Splunk Enterprise and ES plus many apps and TAs (multisite indexer cluster, SH cluster; Splunk resides in AWS). I have learned that I need to upgrade from 7.x to 8.0/8.1, and then from 8.0/8.1 to 8.2.3. The question is: do I need to upgrade Python once when upgrading to 8.0/8.1, and then upgrade Python again when upgrading from 8.0/8.1 to 8.2.3? I could use any valuable advice you might have on this process as well.
Good afternoon, I'm wondering if I may be able to get a bit of help with this one, as I'm struggling to achieve what I want. I would like to query my 3 servers about their hardware status, such as how much space is left on the HDD, etc.; however, I'm really struggling to get my head around how to go about achieving this. I've seen a few posts on here which refer to making changes to the inputs.conf file by adding perfmon, but firstly I'm not 100% sure which inputs.conf I should be doing this on (I'm presuming the forwarder's), if that is the case at all, and secondly I'm not sure where and how this information is gleaned from. If anyone would be able to point me in the right direction to a resource that is a step-by-step guide (or thereabouts), I would be very grateful. TIA
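For the disk-space part specifically, the change those posts describe is a perfmon stanza in inputs.conf on each Windows forwarder, typically inside the Splunk Add-on for Windows; a sketch with a placeholder index name:

[perfmon://LogicalDisk]
object = LogicalDisk
counters = % Free Space; Free Megabytes
instances = *
interval = 60
index = windows_perf
disabled = 0

The forwarder reads these values from the same Windows Performance Monitor counters you would see in perfmon.exe and sends one event per interval.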
Hello, can I get the search ID for the search that is triggered by a dashboard? What is the syntax to use this search ID to create an alert? Will this search ID trigger the search of the same dashboard even if the underlying search code of the dashboard changes? Thanks!
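For the first question, if this is a Simple XML dashboard, the dispatched job's search ID is exposed as the $job.sid$ token (a sketch; the id and token names are arbitrary):

<search id="base_search">
  <query>index=_internal | stats count</query>
  <done>
    <!-- capture the search ID of the job this dashboard just dispatched -->
    <set token="sid">$job.sid$</set>
  </done>
</search>

Note that a SID identifies one dispatched job, so it changes on every run and does not track later edits to the dashboard; alerts are built from the search string itself (Save As > Alert), not from a SID.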
Hi, I am unable to access the web interface of a Splunk indexer server. When I try to access it using the URL https://splunkindexerservr:8000, I get a "can't reach this page" error. Thank you for your suggestions.
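A first thing to check, as a sketch (indexers in a cluster are often deployed with Splunk Web turned off on purpose): whether the web server is enabled and which port the instance thinks it uses.

# web.conf on the indexer
[settings]
startwebserver = 1   # 0 means Splunk Web is disabled entirely

# or query it from the CLI
$SPLUNK_HOME/bin/splunk show web-port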
Hello, I have a timezone issue that I don't understand. I have two sets of indexed logs in different indexes, indexed by the same indexer. The sourcetype is the same for both. I don't explicitly modify the timezone anywhere.

For the first index, the _time shown is the right one (same as the one in the log itself). (Screenshot: Server 1 - _time OK)

For the second index, the _time is 1 hour behind since the daylight saving time change a few days ago. If I look at the _time field, it has the right date_hour, but it shows something different. (Screenshot: Server 2 - _time wrong)

I checked on the servers where the logs are generated, but they are running the same (and correct) timezone: CET. I am lost on this issue; any suggestions on where I should look?
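When no timezone is set anywhere, _time falls back to the timezone of whichever instance parses the event, so a single host with a wrong or stale zone setting shifts events by exactly the DST offset. A sketch of pinning it explicitly in props.conf on the parsing tier (the sourcetype name is a placeholder):

[your_sourcetype]
# CET/CEST, with the DST switch handled automatically
TZ = Europe/Paris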
We have a multisite indexer cluster spanning 2 DCs, one on the west coast and another on the east coast. I am now working on a project to move from a single search head to a multisite search head cluster setup. I have trouble understanding what the benefit of turning off search affinity in the SHC really is.

My understanding is that search affinity reduces traffic between sites because search heads only get results from indexers on their local site, meaning searches can run faster (ref: https://docs.splunk.com/Documentation/Splunk/8.2.2/Indexer/Multisitesearchaffinity).

However, the SHC documentation, https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/DeploymultisiteSHC, recommends turning search affinity off so that search heads run searches across indexers spanning all sites: "If, instead, you set different search heads to different sites, the end user might notice lag time in getting some results, depending on which search head happens to run a particular search."

Well, wouldn't turning off search affinity make searches run slower if a search head gets the results it needs from an indexer at another site? It sounds to me like these two documentation pages contradict each other, unless I'm missing something.
From a checkbox value, if I choose multiple sites, I would like to show a separate line per site in a line chart of average trackout time. The problem now is that if I choose multiple sites, it shows only one line, combining all sites' average trackout values.

Query:

MicronSite IN($site$) index=mtparam sourcetype=CommandTimesByArea
| rex field=_raw "Fabwide:AvgTotalTrackoutTime\s+(?<AvgTotalTrackoutTime>\d+)"
| timechart span=12h avg(AvgTotalTrackoutTime) aligntime=@d+7h avg(command_time)

For example, from the checkbox list I choose "F10N" and "F10W", but the chart shows only one line combining those two sites' average trackout time values. I would like to see two separate lines: one for F10N's average trackout time and another for F10W's. Please help to suggest a fix for this issue.
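The usual fix is a split-by clause so timechart draws one series per site; a sketch with the fields from the question (timechart accepts only a single aggregation when a by clause is used, so avg(command_time) would need its own search or panel):

MicronSite IN($site$) index=mtparam sourcetype=CommandTimesByArea
| rex field=_raw "Fabwide:AvgTotalTrackoutTime\s+(?<AvgTotalTrackoutTime>\d+)"
| timechart span=12h aligntime=@d+7h avg(AvgTotalTrackoutTime) by MicronSite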
Hi, I have a log file that looks like the below. In the first block of logs I need to extract x value1, and in the second block I need to extract x value2. If both values match, I need to run another query to get the output of proj_id.

Logs:

Code=Info words=check text=Checking for messages... received \x1B[0;m job\x1B[0;x=value1 proj_id=edcbidh
Code=Info words=check text=\x1B[0;33mwarning:  failed \x1B[0;m \x1B[0;33mduration\x1B[0;m=00.0006ms \x1B[0;33mjob\x1B[0;x=value2 \x1B[0;

Any help is appreciated. Thanks.
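A sketch of one approach; the regexes are guesses against the escaped sample above, and the index is a placeholder. It extracts the x value from both event shapes with one rex, then keeps only values that occur in more than one event and carries proj_id along:

index=your_index Code=Info
| rex "x=(?<x_value>\w+)"
| rex "proj_id=(?<proj_id>\w+)"
| stats count as occurrences values(proj_id) as proj_id by x_value
| where occurrences > 1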
I configured a CloudWatch Logs input via Splunk Web using the Splunk Add-on for AWS (Create New Input -> VPC Flow Logs -> CloudWatch Logs) and got a validation error:

Value '*' at 'logGroupName' failed to satisfy constraint: Member must satisfy regular expression pattern: [\\.\\-_/#A-Za-z0-9]+

I also tried with a comma, and it did not work either. So I tried with the group value "AWS/VPCFLOWLOGS" and got:

exceptions.ResourceNotFoundException: ResourceNotFoundException: 400 Bad Request {'__type': 'ResourceNotFoundException', 'message': 'The specified log group does not exist.'}

Any solution to get the logs into Splunk?
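The input expects an exact log group name as it appears in CloudWatch; wildcards are rejected by the AWS API, and "AWS/VPCFLOWLOGS" looks like a metric namespace rather than a log group, hence the ResourceNotFoundException. A quick way to list the real names (a sketch using the AWS CLI; the region is a placeholder):

aws logs describe-log-groups --region us-east-1 --query "logGroups[].logGroupName"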