All Posts
Good morning fellow Splunkthiasts! I have an index with 100k+ events per minute (all of them with the same sourcetype), and approximately 100 fields are known in this dataset. Some of these events are duplicated, while others are unique. My aim is to understand the duplication and be able to explain exactly which events get duplicated. I am detecting duplicates using this SPL:

index="myindex" sourcetype="mysourcetype"
| eventstats count AS duplicates BY _time, _raw

Now I need to identify which fields, or which combination of fields, make the difference: under what circumstances is an event ingested twice? I tried the predict command; however, while it produces new values for the "duplicates" field, it does not disclose the rule by which it makes its decision. In other words, I am not interested in the prediction itself, I want to know the predictors. Is something like that possible in SPL?
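One possible way to look for the predictors (a sketch, not a definitive method; fieldA is a placeholder for any of your ~100 fields): flag each event as duplicated or unique, then compare value distributions between the two groups:

index="myindex" sourcetype="mysourcetype"
| eventstats count AS duplicates BY _time, _raw
| eval is_dup=if(duplicates>1, "dup", "unique")
| stats count BY is_dup, fieldA

Values that appear only (or mostly) when is_dup="dup" point at candidate predictors; the contingency command ("... | contingency is_dup fieldA") gives the same comparison as a co-occurrence table.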
Hello everyone, I'm coming to you for advice. I am currently working with Splunk to set up monitoring for WSO2-APIM instances. According to the WSO2-APIM documentation, logs are generated as follows:

[2019-12-12 17:30:08,091] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]"
[2019-12-12 17:30:08,093] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Host: localhost:8243[\r][\n]"
[2019-12-12 17:30:08,094] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "User-Agent: curl/7.54.0[\r][\n]"
[2019-12-12 17:30:08,095] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "accept: */*[\r][\n]"
[2019-12-12 17:30:08,096] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]"
[2019-12-12 17:30:08,097] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "[\r][\n]"
[2019-12-12 17:30:08,105] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]"
[2019-12-12 17:30:08,106] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "accept: */*[\r][\n]"
[2019-12-12 17:30:08,107] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Host: www.mocky.io[\r][\n]"
[2019-12-12 17:30:08,108] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Connection: Keep-Alive[\r][\n]"
[2019-12-12 17:30:08,109] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2019-12-12 17:30:08,110] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "[\r][\n]"
[2019-12-12 17:30:08,266] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "HTTP/1.1 200 OK[\r][\n]"
[2019-12-12 17:30:08,268] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Server: Cowboy[\r][\n]"
[2019-12-12 17:30:08,269] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Connection: keep-alive[\r][\n]"
[2019-12-12 17:30:08,271] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]"
[2019-12-12 17:30:08,272] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Type: application/json[\r][\n]"
[2019-12-12 17:30:08,273] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Length: 20[\r][\n]"
[2019-12-12 17:30:08,274] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Via: 1.1 vegur[\r][\n]"
[2019-12-12 17:30:08,275] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "[\r][\n]"
[2019-12-12 17:30:08,276] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "{ "hello": "world" }"
[2019-12-12 17:30:08,282] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "HTTP/1.1 200 OK[\r][\n]"
[2019-12-12 17:30:08,283] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Expose-Headers: [\r][\n]"
[2019-12-12 17:30:08,284] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Origin: *[\r][\n]"
[2019-12-12 17:30:08,285] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Methods: GET[\r][\n]"
[2019-12-12 17:30:08,286] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]"
[2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Content-Type: application/json[\r][\n]"
[2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Via: 1.1 vegur[\r][\n]"
[2019-12-12 17:30:08,288] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]"
[2019-12-12 17:30:08,289] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Transfer-Encoding: chunked[\r][\n]"
[2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]"
[2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "14[\r][\n]"
[2019-12-12 17:30:08,291] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "{ "hello": "world" }[\r][\n]"
[2019-12-12 17:30:08,292] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "0[\r][\n]"
[2019-12-12 17:30:08,293] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]"

And also according to the doc:
DEBUG - wire >> Represents the message coming into the API Gateway from the wire.
DEBUG - wire << Represents the message that goes to the wire from the API Gateway.

I use AWS Lambda to retrieve the WSO2-APIM logs, which are stored in AWS CloudWatch. I've just started using Splunk, so I'm not very good at SPL yet. I would like Splunk to process the events with SPL and then output something like this:

Date, loglevel, action_https, correlationID, message, duration
[2019-12-12 17:30:08,091], DEBUG, HTTPS-Listener, dispatcher-5, "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]" "Host: localhost:8243[\r][\n]" "User-Agent: curl/7.54.0[\r][\n]" "accept: */*[\r][\n]" "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]" "[\r][\n]", 006
[2019-12-12 17:30:08,105], DEBUG, HTTPS-Listener, dispatcher-1, "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]" "accept: */*[\r][\n]" "Host: www.mocky.io[\r][\n]" "Connection: Keep-Alive[\r][\n]" "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]" "[\r][\n]", 005
[2019-12-12 17:30:08,266], DEBUG, HTTPS-Sender, dispatcher-1, "HTTP/1.1 200 OK[\r][\n]" "Server: Cowboy[\r][\n]" "Connection: keep-alive[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Content-Type: application/json[\r][\n]" "Content-Length: 20[\r][\n]" "Via: 1.1 vegur[\r][\n]" "[\r][\n]" "{ "hello": "world" }", 010
[2019-12-12 17:30:08,282], DEBUG, HTTPS-Listener, dispatcher-5, "HTTP/1.1 200 OK[\r][\n]" "Access-Control-Expose-Headers: [\r][\n]" "Access-Control-Allow-Origin: *[\r][\n]" "Access-Control-Allow-Methods: GET[\r][\n]" "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]" "Content-Type: application/json[\r][\n]" "Via: 1.1 vegur[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Transfer-Encoding: chunked[\r][\n]" "[\r][\n]" "14[\r][\n]" "{ "hello": "world" }[\r][\n]" "0[\r][\n]" "[\r][\n]", 011

Do you have any ideas on how to do this with SPL in the Search app? Thank you to those who took the time to read and reply.
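A possible starting point (a rough sketch, assuming each log line is indexed as a single event; the index and sourcetype names are placeholders): extract the fields with rex, then let transaction group consecutive lines that share the same listener/sender, dispatcher, and direction, which also yields a duration:

index=YOUR_INDEX sourcetype=YOUR_SOURCETYPE "DEBUG - wire"
| rex "^\[(?<log_time>[^\]]+)\]\s+(?<loglevel>\w+)\s+-\s+wire\s+(?<action_https>\S+)\s+I/O\s+(?<correlationID>dispatcher-\d+)\s+(?<direction><<|>>)\s+(?<message>.*)$"
| transaction action_https correlationID direction maxpause=1s
| table _time loglevel action_https correlationID message duration

Note that transaction reports duration in seconds; something like "| eval duration_ms=printf("%03d", round(duration*1000))" would get you the millisecond format shown above.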
@BRFZ If you have no cluster, the data are not replicated. So if one indexer goes down, your searches cannot access all of the data.
@PaulPanther Thank you for your response. Does it have any other impact that the indexers are not in a cluster?
1. Create the necessary indexes on your indexers.
2. Configure forwarding as described in Best practice: Forward search head data to the indexer layer - Splunk Documentation (sketched below).
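A minimal outputs.conf sketch for the search head, following that documentation page (the output group name and the indexer host:port values are placeholders for your environment):

# outputs.conf on the search head
# Keep no local copy; forward everything to the indexer layer
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997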
As mentioned by @Atyuha.Pal, we can make use of the Metric Browser to create one. Here's a simple workaround: if you open the Transaction Scorecard and double-click on any of the bar charts, it will take you to the Metric Browser, where you can see the metrics used, e.g. Number of Slow Calls, Stall Count, etc. Using those, you can create a dashboard by inputting the metrics. We have Slow, Very Slow, Error, and Stall, so the tricky part is to create the "Normal" one. We can use a metric expression to derive it: Normal = Calls per Minute - Slow - Very Slow - Stall - Error. I did a quick check and the value looks correct. Thanks, Terence
Hello, I have an architecture with a single SH and two indexers. I've installed the Splunk for Microsoft 365 add-on on the search head, so the collected logs are stored in the search head's index, but I want them to be stored on the indexers. Can you help me? Thank you.
I see that it has been quite some time since you posted this question. I just wanted to "second" it, as I am working on hardening a Splunk platform myself at the moment and am wondering about the same thing. By any chance, have you found any answers?
You should find all the necessary information about Splunk TAs and the KV store in your "_internal" index. As a second step, you could check the "source" field for the TAs that you want to monitor. Most of the available TAs write their logs to their own log file under $SPLUNK_HOME/var/log/splunk. For the KV store, check mongod.log. More information: What Splunk software logs about itself - Splunk Documentation
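For example (the TA log file name below is an assumption; check $SPLUNK_HOME/var/log/splunk for the actual file name of your add-on):

index=_internal source="*ta_your_addon*.log*"
index=_internal source="*mongod.log*"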
Thanks! The query consumes a lot of resources, but it works.
The Aruba Networks App for Splunk | Splunkbase seems to be outdated. If you want to cherry-pick some of the dashboard searches, you could install the app on a standalone instance, review the dashboards, and copy the searches to reuse them in your own app. Please clarify your second question so that we can help.
@pm2012 Try \d+:\d+\s(?<host>\S+)  
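To test that quickly at search time before wiring it into props/transforms (your_search is a placeholder for the base search):

your_search | rex "\d+:\d+\s(?<host>\S+)" | table _raw host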
This works fine (I used "form" instead of "dashboard", as my dashboard has many inputs). Thanks! Sz
I need to call a custom function inside another custom function. How do I implement that?
Hi all, I have faced a serious problem after upgrading indexers to 9.2.0.1! Occasionally they stop the data flow, and sometimes they are shown as down on the cluster master! I analyzed the problem, and this error appears occasionally:

Search peer indexer-1 has the following message: The index processor has paused data flow. Too many tsidx files in idx=main bucket="/opt/SplunkData/db/defaultdb/hot_v1_13320", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

It worked smoothly with the same load on lower versions! I think this is a bug in the new version, or some additional configuration is needed. Finally, I rolled back to 9.1.3, and it now works perfectly.
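For anyone checking whether they are affected, a quick way to count these pauses per indexer (a sketch that matches the message text literally):

index=_internal sourcetype=splunkd "Too many tsidx files"
| timechart span=1h count BY host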
Hi @bowesmana, The allowCustomValues attribute indeed works well, but my requirement is slightly different. I have a text input box inside an HTML tag that receives cron input from the user, and this input is then processed by a JavaScript file. Here, I'm attempting to implement a dropdown where the user can either select from predefined cron expressions or enter their own. Here's the content of my HTML tag:

<html>
  <div>
    <input type="text" id="cron_input" placeholder="Enter cron expression (optional)" name="cron_command" />
  </div>
  <div>
    <p/>
    <button id="save_cron_input" class="btn-primary">Save</button>
  </div>
</html>

Is it possible to include a dropdown with allowCustomValues inside the HTML tag (as depicted in the image below)? I aim to provide some default cron expressions to the user. The main goal of this configuration is to gather input (cron expressions) from the user. Additionally, I've included some basic cron expressions in the dropdown so the user can either select from them or enter their own. Thank you for your assistance!
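In case it helps: a SplunkJS sketch of that idea (the container id and the example cron choices are assumptions; it renders a DropdownView with allowCustomValues and copies the selection into the existing text input):

require([
    "jquery",
    "splunkjs/mvc/dropdownview",
    "splunkjs/mvc/simplexml/ready!"
], function($, DropdownView) {
    // Render a dropdown with a few predefined cron expressions;
    // allowCustomValues lets the user type their own value.
    var cronDropdown = new DropdownView({
        id: "cron_dropdown",
        choices: [
            {label: "Every 5 minutes", value: "*/5 * * * *"},
            {label: "Hourly", value: "0 * * * *"},
            {label: "Daily at midnight", value: "0 0 * * *"}
        ],
        allowCustomValues: true,
        el: $("#cron_dropdown_container") // assumed <div> placed next to #cron_input
    }).render();

    // Copy the selected (or typed) expression into the existing text input
    cronDropdown.on("change", function() {
        $("#cron_input").val(cronDropdown.val());
    });
});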
Hi @KellyP, in the search you shared you forgot the join command, but in any case avoid join, and if possible forget this command, because it is very slow and resource-consuming: Splunk isn't a relational DB, it's a search engine. So you can correlate events in a different way, using stats:

(index=netproxymobility sourcetype="zscalernss-web") OR index=netlte
| stats values(transactionsize) AS transactionsize values(responsesize) AS responsesize values(requestsize) AS requestsize values(urlcategory) AS urlcategory values(serverip) AS serverip values(hostname) AS hostname values(appname) AS appname values(appclass) AS appclass values(urlclass) AS urlclass values(IMEI) AS IMEI BY ClientIP

If you want only the events present in both indexes, you can add an additional clause:

(index=netproxymobility sourcetype="zscalernss-web") OR index=netlte
| stats values(transactionsize) AS transactionsize values(responsesize) AS responsesize values(requestsize) AS requestsize values(urlcategory) AS urlcategory values(serverip) AS serverip values(hostname) AS hostname values(appname) AS appname values(appclass) AS appclass values(urlclass) AS urlclass values(IMEI) AS IMEI dc(index) AS index_count BY ClientIP
| where index_count=2
| fields - index_count

Ciao. Giuseppe
Hi @KendallW, check whether the issue is related to the stanza header or to the regex: use a sourcetype instead of host in the stanza header. I have sometimes found issues when using host or source instead of sourcetype. Ciao. Giuseppe
Hi @slearntrain, good for you, see you next time! Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated
Hi, has there been any update since?