All Posts


The Aruba Networks App for Splunk | Splunkbase seems to be outdated. If you want to cherry-pick some of the dashboard searches, you could install the app on a standalone instance, review the dashboards, and copy the searches to reuse them in your own app. Please clarify your second question so we can help.
@pm2012 Try \d+:\d+\s(?<host>\S+)  
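A minimal sketch of applying that regex with rex (the index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype
| rex "\d+:\d+\s(?<host>\S+)"
| stats count BY host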
This works fine (I used "form" instead of "dashboard", as a dashboard has many inputs). Thanks! Sz
I need to call a custom function inside another custom function. How can I implement this?
Hi all, I have faced a serious problem after upgrading indexers to 9.2.0.1! Occasionally they stop data flow, and sometimes they show as down on the cluster master! I analyzed the problem and this error appears occasionally:

Search peer indexer-1 has the following message: The index processor has paused data flow. Too many tsidx files in idx=main bucket="/opt/SplunkData/db/defaultdb/hot_v1_13320", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

It worked smoothly with the same load on lower versions! I think this is a bug in the new version, or some additional configuration is needed! Finally, I rolled back to 9.1.3 and it now works perfectly.
Hi @bowesmana, The allowCustomValues attribute indeed works well, but my requirement is slightly different. I have a text input box inside an HTML tag that receives cron input from the user, and this input is then processed with a JavaScript file. Here, I'm attempting to implement a dropdown where the user can either select from predefined cron expressions or enter their own. Here's the content of my HTML tag:

<html>
  <div>
    <input type="text" id="cron_input" placeholder="Enter cron expression (optional)" name="cron_command" />
  </div>
  <div>
    <p/>
    <button id="save_cron_input" class="btn-primary">Save</button>
  </div>
</html>

Is it possible to include a dropdown with allowCustomValues inside the HTML tag (as depicted in the image below)? I aim to provide some default cron expressions to the user. The main goal of this configuration is to gather input (cron expressions) from the user. Additionally, I've included some basic cron expressions in the dropdown so the user can either select from them or enter their own. Thank you for your assistance!
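For comparison, a standard Simple XML dropdown with allowCustomValues looks roughly like this sketch (the token name and choices are illustrative, not from the original post):

<input type="dropdown" token="cron_tok">
  <label>Cron expression</label>
  <choice value="0 * * * *">Hourly</choice>
  <choice value="0 0 * * *">Daily at midnight</choice>
  <allowCustomValues>true</allowCustomValues>
</input>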
Hi @KellyP, in the search you shared you forgot the join command, but anyway, avoid join and if possible forget this command, because it's very slow and resource consuming: Splunk isn't a relational DB, it's a search engine. So you can correlate events in a different way using stats:

(index=netproxymobility sourcetype="zscalernss-web") OR index=netlte
| stats values(transactionsize) AS transactionsize values(responsesize) AS responsesize values(requestsize) AS requestsize values(urlcategory) AS urlcategory values(serverip) AS serverip values(ClientIP) AS ClientIP values(hostname) AS hostname values(appname) AS appname values(appclass) AS appclass values(urlclass) AS urlclass values(IMEI) AS IMEI BY ClientIP

If you want only the events present in both indexes, you can add an additional clause:

(index=netproxymobility sourcetype="zscalernss-web") OR index=netlte
| stats values(transactionsize) AS transactionsize values(responsesize) AS responsesize values(requestsize) AS requestsize values(urlcategory) AS urlcategory values(serverip) AS serverip values(ClientIP) AS ClientIP values(hostname) AS hostname values(appname) AS appname values(appclass) AS appclass values(urlclass) AS urlclass values(IMEI) AS IMEI dc(index) AS index_count BY ClientIP
| where index_count=2
| fields - index_count

Ciao. Giuseppe
Hi @KendallW, check if the issue is related to the header or to the regex: use a sourcetype instead of host in the stanza header. Sometimes I have found issues using host or source instead of sourcetype. Ciao. Giuseppe
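A minimal sketch of the suggested stanza-header change in props.conf (the sourcetype name is a placeholder, and the setting shown is only illustrative):

# props.conf -- keyed by host
[host::x.x.x.x]
TIME_PREFIX = <your regex>

# props.conf -- same settings keyed by sourcetype instead
[my_sourcetype]
TIME_PREFIX = <your regex>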
Hi @slearntrain, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi, has there been any update since?
There is a recent question about doing this in a dashboard. Using a time selector token would be the cleanest. If you are not doing it in a dashboard, but this is a singular use case, I can think of an ugly map, like this:

| makeresults
| addinfo
| eval start_24h_earlier = relative_time(info_min_time, "-24h")
| map search="search index=netlte earliest=$start_24h_earlier$ | dedup ClientIP | fields ClientIP IMEI"
| join ClientIP [search index=netproxymobility sourcetype="zscalernss-web" | fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass]

Here is a proof of concept:

| makeresults
| addinfo
| eval int_start = relative_time(info_min_time, "-24h"), int_end = relative_time(info_max_time, "-4h")
| map search="search index=_audit earliest=$int_start$ latest=$int_end$ | stats min(_time) as in_begin max(_time) as in_end by action"
| join action [search index=_audit | stats min(_time) as out_begin max(_time) as out_end by action]
| fieldformat in_begin = strftime(in_begin, "%F %T")
| fieldformat in_end = strftime(in_end, "%F %T")
| fieldformat out_begin = strftime(out_begin, "%F %T")
| fieldformat out_end = strftime(out_end, "%F %T")

My output looks like

action                 in_begin             in_end               out_begin            out_end
expired_session_token  2024-03-24 23:23:20  2024-03-25 12:25:33  2024-03-25 22:33:51  2024-03-25 23:19:51
login attempt          2024-03-24 22:23:12  2024-03-25 04:25:11  2024-03-25 21:33:26  2024-03-25 21:33:26
quota                  2024-03-24 22:23:16  2024-03-25 04:25:17  2024-03-25 21:41:52  2024-03-25 23:21:45
read_session_token     2024-03-24 19:21:02  2024-03-25 19:18:58  2024-03-25 20:17:30  2024-03-25 23:21:49
search                 2024-03-24 19:37:09  2024-03-25 11:29:40  2024-03-25 21:15:35  2024-03-25 23:21:05
update                 2024-03-24 19:51:57  2024-03-25 09:15:17  2024-03-25 21:13:23  2024-03-25 23:09:53
validate_token         2024-03-24 19:21:02  2024-03-25 19:18:58  2024-03-25 20:17:30  2024-03-25 23:21:49
Hi @KendallW   Thanks for the reply, but that does not work as I'm plotting this in a line chart. The data is coming to SignalFx from the StatsD agent.
Hi SMEs, Seeking help with the field extraction below to capture hostname1, hostname2, hostname3 and hostname4:

Mar 22 04:00:01 hostname1 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
Mar 22 04:00:01 hostname2 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
2024-03-21T23:59:31.143161+05:30 hostname3 caam: [INVENTORY|CaaM-14a669917c4a02f5|caam|e0ded6f4f97c17132995|Dummy-5|INFO|caam_inventory_controller] Fetching operationexecutions filtering with vn_id CaaM-3ade67652a6a02f5 and tenant caam
2024-03-23T04:00:17.664082+05:30 hostname4 sudo: root : TTY=unknown ; PWD=/home/caam/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-22 -e Mar\ 22 /var/log/secure.7.gz
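A minimal sketch of one possible extraction covering both timestamp formats in these samples (the field name extracted_host and the index are placeholders):

index=your_index
| rex "^(?:\w{3}\s+\d+\s[\d:]+|\d{4}-\d{2}-\d{2}T\S+)\s(?<extracted_host>\S+)"
| stats count BY extracted_host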
Alright!!! I found the answer to this question. Modified the below query by changing the time formats of the new fields and then pulling out the difference:

index="abc" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd*"
| rex field=_raw "\"Application-ID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO ((?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}))"
| stats latest(eval(if(steptype="EndNBflow",max(infotime),0))) AS endNBflow, latest(eval(if(steptype="Deserialized payload",infotime,0))) AS endPayLoad, count(eval(match(_raw,"error"))) as error_count, dc(steptype) as unique_steptypes BY appid
| where unique_steptypes >= 16 AND error_count=0
| eval endNBflowtime=strptime(endNBflow, "%Y-%m-%d %H:%M:%S,%3N")
| eval endPayLoadtime=strptime(endPayLoad, "%Y-%m-%d %H:%M:%S,%3N")
| eval time_difference = endNBflowtime - endPayLoadtime
| table appid, endNBflow, endPayLoad, endNBflowtime, endPayLoadtime, time_difference

Results look like this:

Appid  endNBflow                Endpayload               endNBflowtime      Endpayloadtime     responsetime
Abcd1  2024-03-04 16:10:50,007  2024-03-04 16:10:49,886  1709529050.007000  1709529049.886000  0.121000
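If a human-readable duration is needed, as asked earlier in the thread, one possible final step (a sketch, not part of the original query) is tostring with the duration format:

| eval time_difference_readable = tostring(time_difference, "duration")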
Thanks a lot yuanli, That worked, you are a genius. I thought I could never structure it in Splunk. I did speak to the dev team; apparently the JSON is structured that way to feed a particular dashboard system that they use, so it has to be in that structure for that system to consume. However, they have agreed to update the structure in the next release, which could be a few months away (6 months at least). So in the meantime I can work with this bad JSON until then. Thanks a lot again. I had searched Splunk for something like this before and hadn't seen anything.
Check for permission issues in the log files (splunkd and mongod logs). Check file ownership and permissions for server.pem and server.key.
Hi @dongwonn, a few things to check:
- Check that the host field in Splunk matches the host:: stanza in your props.conf.
- Since you are not explicitly specifying a lot of configs, they may be taking default values from other places. Use btool to check the full props settings being applied to this host:
$SPLUNK_HOME/bin/splunk cmd btool props list host::x.x.x.21
- Update your TIME_PREFIX to capture the full string before the timestamp, beginning at the start of the event, so that Splunk will definitely exclude the preceding timestamps. Example (a fuller props.conf sketch follows below):
TIME_PREFIX=^\w{3}\s\d\d\s(\d{2}\:?){3}\s(\d{0,3}\.?){4}\s\w{3}\s\d\d\s(\d{2}\:?){3}\s[\w\s]+\-:\s\[
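A minimal props.conf sketch pulling these together; the TIME_FORMAT and MAX_TIMESTAMP_LOOKAHEAD values are assumptions and must be adjusted to the actual timestamp in your events:

# props.conf on the parsing tier
[host::x.x.x.21]
TIME_PREFIX = ^\w{3}\s\d\d\s(\d{2}\:?){3}\s(\d{0,3}\.?){4}\s\w{3}\s\d\d\s(\d{2}\:?){3}\s[\w\s]+\-:\s\[
# TIME_FORMAT must match the timestamp that follows TIME_PREFIX in the raw event
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30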
Hi @sks, if you want just the percentage of misses and the percentage of hits, you can do this with eval:

| eval Bperc=('B'/(B+C))*100
| eval Cperc=('C'/(B+C))*100

If you want to show this in a chart (e.g. a pie chart), you don't need to calculate the percentage, as Splunk will do this for you, but you will need to get the values of B and C into the same column using transpose. Example:
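Below is a minimal sketch of the transpose step, assuming a single result row with fields B and C (the rename targets are illustrative):

| fields B C
| transpose
| rename column AS category, "row 1" AS count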
Hi Giuseppe, I want to view the results in the format below. I also want the diff time in a human-readable format like 10 sec, 15 mins, etc.

Appid  Responsetime(Diff)

In my use case I have more than 5000 messages, and each successful message has 16 steptypes, so I have put the query this way:

index="abc" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd*"
| rex field=_raw "\"Application-ID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO ((?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}))"
| stats latest(eval(if(steptype="EndNBflow",max(infotime),0))) AS endNBflow latest(eval(if(steptype="Deserialized payload",infotime,0))) AS endPayLoad dc(steptype) as unique_steptypes by appid
| where unique_steptypes >= 16
| eval diff=endNBflow-endPayLoad

My earlier code included:

| rex field=_raw "INFO  ((?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}))"
| stats max(infotime) as maxinfotime, min(infotime) as mininfotime, count(eval(match(_raw, "error"))) as error_count, dc(steptype) as unique_steptypes by appid
| where error_count = 0
| eval maxtime=strptime(maxinfotime,"%Y-%m-%d %H:%M:%S,%3N")
| eval mintime=strptime(mininfotime,"%Y-%m-%d %H:%M:%S,%3N")
| eval TimeDiff=maxtime-mintime
| eval TimeDiff_formated = strftime(TimeDiff,"%H:%M:%S,%3N")
| where unique_steptypes >= 16
| sort steptype
| table appid, mininfotime, maxinfotime, mintime, maxtime, TimeDiff_formated, unique_steptypes, flowname

I am unable to combine these two and get the expected output.
Check if this is the same issue: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-TA-New-Relic-Insight-not-ingesting-data/m-p/528756