All Posts

As @yuanliu says, you will have a much better chance of getting a useful answer if you follow some simple guidelines. It is not possible to tell from what you have posted so far what your events actually look like (or a close anonymised representation of them), nor what it is you are trying to determine from your search. For example, do all the rule fields contain either "None" or a rule name? If so, isnotnull() will always return true, hence all your rules are coming through as "urlurlelabel" (should this be "urlrulelabel"?). Can there be more than one rule label you are interested in for an event, or does apprulelabel always take precedence, even if it is "None"? The latter is what your search is doing, hence the high count for "None". Please provide more relevant information if you would like more help.
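If the intent is to treat "None" as a missing value rather than a real label, something like this might be closer to what you want (a sketch, assuming your three fields are apprulelabel, urlrulelabel, and rulelabel):

| eval apprulelabel=if(apprulelabel=="None", null(), apprulelabel)
| eval urlrulelabel=if(urlrulelabel=="None", null(), urlrulelabel)
| eval rulelabel=if(rulelabel=="None", null(), rulelabel)
| eval label=coalesce(apprulelabel, urlrulelabel, rulelabel, "None")

coalesce() returns the first non-null argument, so "None" only appears when all three fields are empty.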
It doesn't go in the query. The legend is a feature of the viz, and the charting option goes in the SimpleXML source: https://docs.splunk.com/Documentation/Splunk/latest/Viz/ChartConfigurationReference
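For example, to move the legend (a sketch; charting.legend.placement is from the reference above, the query is just a placeholder):

<chart>
  <search>
    <query>index=web | timechart count by status</query>
  </search>
  <option name="charting.legend.placement">bottom</option>
</chart>

Valid placements include right, bottom, top, left, and none.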
Hi, I am having this error since the first of June. Here is my splunkd.log:

06-22-2024 14:54:00.405 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=dbxquery_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.command.DbxQueryServerStart.startDbxQueryServer(DbxQueryServerStart.java:82)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:50)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\
06-22-2024 14:54:00.406 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.
06-22-2024 14:54:00.406 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\
06-22-2024 14:54:04.800 +0800 INFO ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" action=start_task_server, configFile=/opt/splunk/etc/apps/splunk_app_db_connect/config/dbx_task_server.yml
06-22-2024 14:54:04.842 +0800 INFO ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=start_dbxquery_server, configFile=/opt/splunk/etc/apps/splunk_app_db_connect/config/dbxquery_server.yml
06-22-2024 14:54:04.981 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" 14:54:04.980 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection
06-22-2024 14:54:05.015 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 14:54:05.013 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection
06-22-2024 14:54:05.102 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" 14:54:05.101 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store found, store=com.splunk.Entity@d7b1517
06-22-2024 14:54:05.129 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 14:54:05.129 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store found, store=com.splunk.Entity@d7b1517
06-22-2024 14:54:05.214 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.server.bootstrap.TaskServerStart.startTaskServer(TaskServerStart.java:108)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:69)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\
06-22-2024 14:54:05.215 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.
06-22-2024 14:54:05.215 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:74)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\
06-22-2024 14:54:05.233 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=dbxquery_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.command.DbxQueryServerStart.startDbxQueryServer(DbxQueryServerStart.java:82)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:50)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\
06-22-2024 14:54:05.233 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.
06-22-2024 14:54:05.233 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\

And here is the mongod.log:

2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslPEMKeyFile is deprecated. Please use tlsCertificateKeyFile instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslPEMKeyPassword is deprecated. Please use tlsCertificateKeyFilePassword instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
2024-06-19T07:46:17.513Z W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
2024-06-19T07:46:17.522Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2024-06-19T07:46:17.524Z W ASIO [main] No TransportLayer configured during NetworkInterface startup
2024-06-19T07:46:17.527Z I ACCESS [main] permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key are too open

I tried creating a new SSL certificate, but it doesn't work. I also tried changing the permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key, but I am still encountering the same error. What should I do? Please help.
"My question is how can I save the log events from getting dropped when App_Name IN (*) is in force?"

Like @ITWhisperer said, you didn't explain what you expect to get by saving the dropped events when App_Name IN (*) is in force. Unless you illustrate the desired output - an essential part of an answerable question - your question is a simple statement of contradictions. Now, I suspect you do not merely want to contradict yourself. Let me try mind reading: you want a count of events with App_Name, and a separate count of events without.

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| search Environment=* type=* request.path=*
| eval app_name_or_no = if(isnull(App_Name), "no", "yes")
| stats count by app_name_or_no

If this tea-leaf reading is right, the question has nothing to do with events being dropped. One more thing: I don't see any point in inserting that search command after the rex. It is much more effective to put all filters in the index search. What's wrong with this?

index=msad_hcv NOT ("forwarded") Environment=* type=* request.path=*
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| eval app_name_or_no = if(isnull(App_Name), "no", "yes")
| stats count by app_name_or_no
Have you tried the search I suggested? That does exactly what you are saying here, and doesn't use a lookup. (I understand field_A, field_B, etc., are stand-ins for real field names.)
Without other samples, I'd go with the most aggressive:

| rex "title=(?<title>[^\"]+)"
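If you want to check it quickly against the sample event, a makeresults sketch should do (the _raw value here is abbreviated from the post above):

| makeresults
| eval _raw="... title=server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed\""
| rex "title=(?<title>[^\"]+)"
| table title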
What happens if indexer acknowledgment is enabled and there is multisite clustering, or the minimum RF is not met due to indexer failure? If I understand it right, no acknowledgement will be sent until the data has been replicated per the RF value, and in such a scenario the UF will hold the data block in its queue. Would ingestion stall when the queue is full (its default size is only a few MB), even though ingestion is working on another site? If so, ingestion would completely stop even when ingestion itself is working fine and only replication is impacted. I am looking at the journeys below:

UF --> Intermediate Heavy Forwarder --> Indexer
pcf microservices --> Intermediate Heavy Forwarder --> Indexer
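For reference, the settings in question live in outputs.conf on the forwarder; roughly like this (values here are illustrative assumptions, not recommendations):

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
# With useACK enabled, blocks move from the output queue to a wait queue
# (3x maxQueueSize) until the indexer acknowledges them, so this setting
# determines how much unacknowledged data the forwarder can hold.
maxQueueSize = 7MB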
Hi, I hope all is well. I want to ask for more information and a simple explanation. I came across the Distributed Search document and came away with two questions:

1. What are search artifacts? They are the search results and should be replicated, but what exactly do they contain?
2. What is the knowledge bundle? A set of configurations transferred directly from the search heads to the search peers, but why?

Thanks in advance!
Hoping to find a solution here for my rex query (new to rex). I have an event that looks like this:

time="2024-06-22T00:31:43.939620127Z" level=info msg="uuid="KENT-12345678-1234-1234-1234-123456789123", tkt=INC123456789, ci=SAP, state=Escalate, opened=2024-06-22 00:31:06, closed=0001-01-01 00:00:00 +0000 UTC, title=server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed"

How do I write a query that will extract this string: "server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed"?

Thank you very much
Just noticed this in our data, but after we updated the TA-Akamai_SIEM version back in March of this year, our Akamai logs are no longer being extracted into their respective fields. Any ideas as to what might be wrong?
Hi @Anud, We can optimize your search if you provide mock samples of your data, but here's an example using makeresults and your current search structure to simulate the fields required by the visualization:

| makeresults format=csv data="QUE_NAM,FINAL,QUE_DEP
S_FOO,MQ SUCCESS,
S_FOO,CONN FAILED,
S_FOO,MEND FAIL,
S_FOO,,3"
| stats sum(eval(if(FINAL=="MQ SUCCESS", 1, 0))) as good sum(eval(if(FINAL=="CONN FAILED", 1, 0))) as error sum(eval(if(FINAL=="MEND FAIL", 1, 0))) as warn avg(QUE_DEP) as label by QUE_NAM
| rename QUE_NAM as to
| eval from="internal", label="Avg: ".label." Good: ".good." Warn: ".warn." Error: ".error
| append [| makeresults format=csv data="queue_name,current_depth
BAR_Q,1
BAZ_R,2"
| bin _time span=10m
| stats avg(current_depth) as label by queue_name
| rename queue_name as to
| eval from="external", label="Avg: ".label
| appendpipe [ stats values(to) as from | mvexpand from | eval to="internal" ]]

good, error, and warn are special fields supported by the visualization. Add the label field to provide a custom link label, and leave the special fields intact to produce the flowing dot animation.
I'm trying to understand how to update the severity of a notable event when a new event arrives with a normal severity. I'm feeding external alerts into ITSI, and a correlation search turns them into notable events. I'm using a specific ID for the "Notable Event Identifier Fields". These alerts correctly turn into notable events and are placed into an episode. When the same alert comes into ITSI, but with a "Normal"\2 severity, I expect it to change the severity of the prior notable event in the episode. Instead, ITSI treats it like a new notable event and puts it into the same episode. I thought ITSI uses the Notable Event Identifier Fields to determine whether two events are the same. I checked that both the original event and the "clearing" event have the exact same event_identifier_hash, so why does ITSI treat it like an additional alert\event in the episode? Instead of having one normal\clear event in the episode, I now have one critical and one normal. How are you supposed to update the status of an alert\notable event in an episode when a clearing event is received?
Don't believe it will work for Splunk Cloud trials. Docs: https://docs.splunk.com/observability/en/logs/intro-logconnect.html

Region and version availability
Splunk Log Observer Connect is available in the AWS regions us0, us1, eu0, jp0, and au0, and in the GCP region us2. Splunk Log Observer Connect is compatible with Splunk Enterprise versions 9.0.1 and higher, and Splunk Cloud Platform versions 9.0.2209 and higher. Log Observer Connect is not available for Splunk Cloud Platform trials.
Okay, let me back up. One sourcetype contains the correlation logs, with src_ip as its primary identifier. The other sourcetype is our threat logs, where we see far more data about destination, url, app, etc. I want to create a search that takes the IPs from the correlation logs, looks for the same src_ip in the threat logs within a range of 1-2 hours, and returns a detailed table describing what could have caused the correlation event to be created. Is this possible to do without using an outputlookup? Also, this index has a datamodel that I could leverage, where the nodenames are log.threat and log.correlation.
Here is a picture of my results. Hoping to get some help with having the second column populate urlrulelabel, apprulelabel, and rulelabel policies rather than just one.
"It’s not giving the expected result."

This is a lot better than a phrase we hear too often: "It doesn't work." This said, what is the expected result? To ask an answerable data analytics question, follow these golden rules; nay, call them the four commandments:

1. Illustrate the data input (in raw text, anonymize as needed), whether it is raw events or the output of a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Lol, almost there but a million miles away. I attempted something similar but didn't fare well. Thanks a million. Still working through a few new modules, but learning more each day.
This is a little confusing. You are almost there:

index=web sourcetype=access_combined status=200 productId=*
| timechart sum(price) as DailySales count as UnitsSold

Is there something else we need to know?
Yes, such use cases are quite common and simple, and it is not always appropriate to use a lookup table. In fact, correlation search is the most fundamental strength of Splunk. Meanwhile, you do want to consider whether it is appropriate to compare the two sourcetypes over the same search time period.

This said, your final table is not very illustrative of the statement "make a table using fields from sourcetype B that do not exist in sourcetype A" because IP is nowhere in that table. Mind-reading 1: I will insert src_ip into the table. More critically, you did not illustrate what you mean exactly by "compare (IPs from sourcetype A) against a larger set of IPs". In the end result, do you want to list IPs in sourcetype B that do not exist in sourcetype A? Mind-reading 2: I will assume no on this.

index=paloalto (sourcetype=sourcetype_B OR sourcetype=sourcetype_A)
| stats values(field_A) as field_A values(field_B) as field_B values(field_C) as field_C values(sourcetype) as sourcetype by src_ip
| where sourcetype == "sourcetype_A"
| fields - sourcetype

Here, the filter uses a side effect of Splunk's equality comparator on multivalue fields. (There are more semantically expressive alternatives, but most people just use this shortcut.)
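One such alternative, for the record (a sketch using mvfind, which returns null when no value in the multivalue field matches the regex):

| where isnotnull(mvfind(sourcetype, "^sourcetype_A$"))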
Stuck again and not sure what I'm missing... I have the first two steps, but cannot figure out the syntax to use timechart to count all events under a specific label. Any help is greatly appreciated.

The Task: Use timechart to calculate the sum of price as "DailySales" and count all events as "UnitsSold".

What I have so far:

index=web sourcetype=access_combined status=200 productId=*
| timechart sum(price) as DailySales