All Topics



Understand RDP Nesting RDP nesting refers to the practice of establishing multiple Remote Desktop Protocol (RDP) sessions within each other. This can indicate suspicious or unauthorized activity, potentially used to hide malicious actions or bypass security measures.  
Is it only possible to yield results in the generate command? If I run the simple command below it only yields the "hello" message in the generate() function even though generate() calls generate2().

import sys, time
from splunklib.searchcommands import \
    dispatch, GeneratingCommand, Configuration, Option, validators

@Configuration()
class GenerateHelloCommand(GeneratingCommand):
    count = Option(require=True, validate=validators.Integer())

    def generate2(self):
        yield {'_time': time.time(), 'event_no': 2, '_raw': "hello 2"}

    def generate(self):
        self.generate2()
        yield {'_time': time.time(), 'event_no': 1, '_raw': "hello"}

dispatch(GenerateHelloCommand, sys.argv, sys.stdin, sys.stdout, __name__)
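For what it's worth, a likely cause (assuming standard Python generator semantics rather than anything Splunk-specific): calling self.generate2() only creates a generator object that is never iterated, so its records are never emitted. A minimal sketch of a generate() that delegates to it:

    def generate(self):
        # self.generate2() returns a generator; delegate to it so that its
        # records become part of this command's output stream.
        yield from self.generate2()
        # On Python 2 use a loop instead:
        # for record in self.generate2():
        #     yield record
        yield {'_time': time.time(), 'event_no': 1, '_raw': "hello"}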
Hi. I got some great help using a subsearch to match against a directory (CSV or SQL) (https://community.splunk.com/t5/Splunk-Search/What-is-the-fastest-way-to-use-a-lookup-or-match-records-against/m-p/644173#M223131); however, in some cases it could be hundreds of thousands of records, so I'm hitting the 10k subsearch limit. So the question is: what is a good way to match records against a secondary source, say a lookup file? I'm able to use lookup, of course, but it feels like there might be a better way. To use a specific example, I have an index that has a phone number, and a CSV file that puts those phone numbers in lists (like a directory). If I want 'all numbers in the TEST directory' and I know I'll have more than 10k rows, then I do this:

index=myindex more-criteria-to-try-and-reduce
| lookup directory.csv number output list
| search list="TEST"

While this works, it obviously runs pretty long, so I'm just asking if there's a better way. Thank you!
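For comparison, a sketch of the lookup-then-filter pattern, which is not subject to the subsearch row limit; the only real tuning here is pruning fields before the lookup (keep whatever your later commands need) and filtering with where right after it. Field and file names are taken from the example above:

index=myindex more-criteria-to-try-and-reduce
| fields _time number
| lookup directory.csv number OUTPUT list
| where list="TEST"

If you'd rather keep the subsearch approach for smaller directories, the 10k cap itself is the maxout setting under the [subsearch] stanza in limits.conf, though very large subsearches tend to get slow well before the limit is the real problem.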
When we connect a UF/HF to the Deployment Server, we can see the list of UFs/HFs under Forwarder Management -> Clients in the UI. Can we view the list of search head cluster members in the Deployer UI?
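In case it helps, a quick way to list SHC members from the command line; note this runs on a cluster member (or against the captain's management port), not on the deployer itself, and the hostname/credentials below are placeholders:

# On any search head cluster member:
$SPLUNK_HOME/bin/splunk show shcluster-status --verbose

# Or ask the captain's REST endpoint for the member list:
curl -k -u admin:changeme https://<captain>:8089/services/shcluster/captain/members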
Lately I've been creating a bunch of dashboards for our vulnerability management program, all with dynamic drilldowns. For example, one dashboard may show users a summary view of top vulnerabilities broken down by severity and network segment, another may show historical trending of vulnerabilities by department, and another may compare the results from two vulnerability scans and show the differences: what's new and what's fixed. All of these dashboards allow users to drill down to view more details.

A common theme I'm finding is that eventually the drilldowns lead users to some elemental information such as CVE, Host, or Scan. So in each dashboard, I create inline searches to display this info. For example, as users perform their drilldowns, they may arrive at a table that displays host information; as the user clicks the drilldown, a host_id token is created and passed to the inline search, which uses $host_id$ as part of the query. For each dashboard that drills down to Host Info, I have to repeat this using inline searches with the embedded $host_id$ token. This results in many dashboards that use basically the same search string with variable token values.

Is there a way to create a saved search that allows tokens to be passed into it, for example host_id, starttime, and endtime, so I don't have to create inline searches all over the place, and so that if I decide the search needs to be updated, I don't have to track down every dashboard and update all the inline searches?
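One way to get most of this without a saved search is a search macro with arguments: dashboard tokens are expanded into the search string before it is dispatched, so they can be passed straight into the macro call, and updating the macro updates every dashboard that uses it. A minimal sketch with hypothetical names:

# macros.conf in a shared app (stanza, index, and field names are hypothetical)
[host_details(3)]
args = host_id, starttime, endtime
definition = index=vuln_index host_id="$host_id$" earliest="$starttime$" latest="$endtime$" | table host_id cve severity last_seen

Each dashboard panel's inline search then collapses to a single macro call, with the dashboard tokens as arguments:

`host_details($host_id$, $starttime$, $endtime$)`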
I have a search that gets the top users over long periods of time. It also displays the most common field_X value, which can be any value. So it would be something like:

index=some_index
| stats count mode(field_X) by user
| sort - count
| head 10

That takes 30 seconds for 5 million events over 1 day of data. I want to run this for longer periods of time, like a month or even longer. Is the best method to increase performance just to summary index the above example, but removing the top-10 part?
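A sketch of the summary-indexing variant, assuming a scheduled search (hourly or daily, for example) with summary indexing enabled, and assuming mode() is accepted by sistats in your version; the summary index name and saved-search name are placeholders. The populating search:

index=some_index
| sistats count mode(field_X) by user

And the reporting search over the summary, for any longer time range (filtering on the saved-search name the summary events carry as their source):

index=my_summary_index source="top_users_summary"
| stats count mode(field_X) by user
| sort - count
| head 10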
We recently installed the OpenTelemetry collector for Kubernetes in a cluster (https://docs.splunk.com/Observability/gdi/opentelemetry/install-k8s.html), and I see the logs coming into Splunk. I am curious whether there is an app I can install to better format the logs. I can put something together on my own, but if there's something pre-built, I would rather save the time and use that. I did search through the apps, but I only saw some product-specific ones (e.g. stackrox, outcold, sysdig), nothing for the OpenTelemetry collector. One annoyance in particular I'm looking to clean up is that the timestamps are way off (a UTC vs. local translation issue is my guess).
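On the timestamp offset specifically: if Splunk is parsing the timestamp out of the raw text and the text carries no timezone marker, the indexer assumes its own local zone, and a TZ override in props.conf on the indexing tier fixes that. This only applies when the collector is not already setting the HEC time field explicitly, and the sourcetype name below is an assumption; check what actually arrives in your environment first:

# props.conf on the indexers / HEC-receiving tier
# (sourcetype is a placeholder for whatever the collector sends)
[kube:container:my-app]
TZ = UTC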
I'm a Splunk PS admin working at a client site and I wanted to post a challenge and resolution that we encountered.

Problem: The client reported missing knowledge objects in a custom app's private area; they expected ~40 reports but only had ~17. The client last used the reports 7 days prior and asked Splunk PS to investigate.

Environment:
- 3-instance SHC, version 8.2.3, Linux
- >15 indexers
- >50 users across the platform

Troubleshooting approach:
1. Verified that the given knowledge objects (KOs) had not been deleted: a simple SPL search in index="_audit" for the app over the last 10 days showed no suggestion or evidence of deletion.
2. Via the CLI, set the path to the given custom app and listed the objects in savedsearches.conf; the count was 17: cat savedsearches.conf | grep "\[" -P | wc
3. Changed to an alternative SH member and repeated the commands; the count was 44. Verified the third member as well, where the count was also 44.
4. Conclusion: the member with 17 saved searches was clearly out of sync and did not have all recent KOs.
5. Checked captaincy with ./splunk show shcluster-status --verbose; all appeared correct. The member with the limited objects was the current captain, and out_of_sync_node : 0 on all three instances in the cluster.

Remediation:
1. Verified the Monitoring Console: no alerts, health-check issues, or evidence of errors.
2. Created a backup of the user's savedsearches.conf (on one instance): cp savedsearches.conf savedsearches.bak
3. Following the Splunk Docs "SHC: perform a manual resync", moved the captain to an instance with the correct number of KOs: ./splunk transfer shcluster-captain -mgmt_uri https://<server>:8089
4. Carefully issued the destructive command on the out-of-sync instance: ./splunk resync shcluster-replicated-config
5. Repeated this for the second SHC member.
6. Repeated the checks; all three members are now in sync.

Post works: We were unable to locate a release-notes item that suggests this is a bug. There had previously been a period of downtime for the out-of-sync member; its Splunk daemon had stopped following a push from the Deployer. There were still no alerts in the MC, nor logs per the docs to indicate e.g. "Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member."

Conclusions:
- The cluster was silently out of sync.
- Many KOs across multiple apps would have been affected.
- Follow the Splunk Docs.
- We recommended the client upgrade to the latest 9.x version.
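As a follow-up thought, a lightweight check that would have caught this drift earlier is to compare saved-search counts per member with the rest command, for example from a scheduled alert (the splunk_server pattern is a placeholder for your member naming):

| rest /servicesNS/-/-/saved/searches splunk_server=shc-*
| stats count by splunk_server

If the counts differ across members, something is out of sync even when out_of_sync_node reports 0.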
I am developing a Splunk SOAR app that retrieves JSON from our backend and ingests it into a container in Splunk SOAR. However, I need to show some fields that are not included in the container schema, and I want those custom fields to be deployed with my app. Hence my question: is it possible to add custom fields to a Splunk Phantom container schema programmatically, so our customers do not need to create them manually in the Splunk SOAR user interface?
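For reference, once the custom field definitions exist, an app can populate them when it creates the container; a sketch from a connector action, where the payload and field names are hypothetical and whether the definitions themselves can be created programmatically is exactly the open question here:

# Inside a connector action (names and payload are hypothetical)
container = {
    "name": "Backend alert",
    "label": "events",
    "severity": "medium",
    # Keys must match custom field names already defined in Event Settings
    "custom_fields": {
        "Backend Ticket ID": payload.get("ticket_id"),
        "Customer": payload.get("customer"),
    },
}
# save_container() returns (status, message, container_id)
ret_val, message, container_id = self.save_container(container)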
Hi. We have a lot of alerts where we need to change the alert.email.to recipients to a new one. Those alerts are in a SHC and have been created over the years directly in the GUI, so I cannot manually edit the files at the OS level, and I don't want to redistribute them with the Deployer unless there is no other option. Basically I can change the recipient, but the issue is that doing so silently changes some other attributes which I cannot set with the REST POST method. There is at least one old unanswered question already touching this issue: https://community.splunk.com/t5/Splunk-Enterprise/Changed-save-searches-alert-cron-schedule-with-rest-api-bash/m-p/559595

What I have done:

| rest /servicesNS/-/-/saved/searches
| search disabled = 0 AND action.email = 1 AND is_scheduled = 1
| search action.email.to = "*<an old email>*"
| search title = "*SPLUNK:Alarm testing Clone*"
| rename eai:acl.owner as acl_owner, eai:acl.app as acl_app, eai:acl.sharing as acl_sharing
| eval URL1 = replace(replace(title, " ", "%20"),":", "%3A")
| eval URL = "curl -ku $PASS -X POST \"https://localhost:8089/servicesNS/" + acl_owner + "/" + acl_app + "/saved/searches/" + URL1 + "\" -d action.email.to=\"<the new email>\""
| fields URL

This gives me a shell command to run for that individual alert ($PASS contains the user:pass pair). When I run that:

curl -vku $PASS -X POST "https://localhost:8089/servicesNS/<user>/alerts_splunk/saved/searches/SPLUNK%3AAlarm%20testing%20Clone -d action.email.to="f.s@some.domain"

It runs as expected, but when I do this query:

| rest /servicesNS/-/-/saved/searches splunk_server=splunk-shc*
| search NOT eai:acl.app IN (splunk_instrumentation splunk_rapid_diag splunk_archiver splunk_monitoring_console splunk_app_db_connect splunk_app_aws Splunk_TA_aws SplunkAdmins Splunk_ML_Toolkit trackme)
| rename "alert.track" as alert_track
| eval type=case(alert_track=1, "alert", (isnotnull(actions) AND actions!="") AND (isnotnull(alert_threshold) AND alert_threshold!=""), "alert", (isnotnull(alert_comparator) AND alert_comparator!="") AND (isnotnull(alert_type) AND alert_type!="always"), "alert", 1==1, "report")
| fields title type eai:acl.app is_scheduled description search disabled triggered_alert_count actions action.script.filename alert.severity cron_schedule disabled splunk_server *
| search title = "SPLUNK:alarm testing Clone"
| sort eai:acl.app title splunk_server
| fields eai:acl.app title splunk_server type *
| search splunk_server = "*-b-*"
| transpose
| where 'row 1' != 'row 2'

I find that, instead of a changed action.email.to, I have a private report with the new action.email.to field! It has eai:acl.sharing as private and is_scheduled = 0 instead of 1. Basically that means that I now have a new private report instead of an updated alert! Any hints or advice on how to do this with REST would be gratefully received! r. Ismo
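For comparison, the pattern that is usually safe for updating a single attribute on an existing app-shared alert is to POST only that attribute to the entity under the app's namespace, with owner nobody for app/global sharing (the app name, search name, and recipient below are placeholders):

curl -ku $PASS -X POST \
  "https://localhost:8089/servicesNS/nobody/alerts_splunk/saved/searches/SPLUNK%3AAlarm%20testing%20Clone" \
  -d action.email.to="new.recipient@some.domain"

If a new private copy still appears, it is worth checking that the owner, app, and URL-encoded title in the POST resolve to exactly the existing shared object; when they don't, the change can land as a new object in the calling user's private namespace.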
WARNING: can't open config file: C:\\gitlab_runner\\builds\\build_home\\splunk/ssl/openssl.cnf

So why is the default location of openssl.cnf not %SPLUNK_HOME%?

C:\Splunk>%SPLUNK_HOME%\bin\openssl version -d
OPENSSLDIR: "C:\\gitlab_runner\\builds\\build_home\\splunk/ssl"

Do I need to create the above "gitlab_runner" directory structure to suppress the warnings? (Using Splunk Enterprise Windows v8.2.10)
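For context, that OPENSSLDIR path is baked in when Splunk's bundled OpenSSL is built (on Splunk's own build machine, hence the gitlab_runner path), so recreating it locally shouldn't be necessary. One way to silence the warning is to point the standard OPENSSL_CONF environment variable at a config file that actually exists; the path below assumes an openssl.cnf under %SPLUNK_HOME% (check your install, or point it at any minimal valid config):

REM Point OpenSSL at an existing config file instead of the build-time path
set OPENSSL_CONF=%SPLUNK_HOME%\openssl.cnf
%SPLUNK_HOME%\bin\openssl version -d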
After reading the doc and this thread, I still have three doubts in my mind:
1. Can I integrate AppD with SNOW only for the event management portion, or do I have to do the full integration (CMDB & Events) because one cannot go without the other?
2. Will the event management integration create incidents out of the box?
3. If I am on AppD SaaS, will I still need to deploy the middle-tier component called the Data Sync Utility?
Thanks for clarifying, regards, Yann
I have log lines like these:

2023/06/09 13:19:31.245 : AUDIT- INFO: Adding profile with id 00001 to TPT
2023/06/09 13:19:32.245 : AUDIT- INFO: Adding profile with id 00002 to TPT
2023/06/09 13:19:33.326 : Will stop adding profiles from id 00003 as maximum size has been exceeded
2023/06/09 13:19:34.245 : AUDIT- INFO: Adding profile with id 00003 to TPT
2023/06/09 13:19:34.245 : AUDIT- INFO: Adding profile with id 00003 to TPT
2023/06/09 13:19:35.245 : AUDIT- INFO: Adding profile with id 00004 to TPT
2023/06/09 13:19:36.326 : Will stop adding profiles from id 00005 as maximum size has been exceeded
2023/06/09 13:19:37.240 : AUDIT- INFO: Adding profile with id 00005 to TPT
2023/06/09 13:19:37.245 : AUDIT- INFO: Adding profile with id 00006 to TPT
2023/06/09 13:19:38.245 : AUDIT- INFO: Adding profile with id 00007 to TPT
2023/06/09 13:19:39.245 : AUDIT- INFO: Adding profile with id 00008 to TPT
2023/06/09 13:19:40.245 : AUDIT- INFO: Adding profile with id 00009 to TPT
2023/06/09 13:19:41.245 : AUDIT- INFO: Adding profile with id 00010 to TPT
2023/06/09 13:19:42.326 : Will stop adding profiles from id 00011 as maximum size has been exceeded
2023/06/09 13:19:43.245 : AUDIT- INFO: Adding profile with id 00011 to TPT
2023/06/09 13:19:44.245 : AUDIT- INFO: Adding profile with id 00012 to TPT
2023/06/09 13:19:45.245 : AUDIT- INFO: Adding profile with id 00013 to TPT
2023/06/09 13:19:46.245 : AUDIT- INFO: Adding profile with id 00014 to TPT

I want to group the events starting from "Adding profile with id" and complete each group with "Will stop adding profiles", with all messages in one group visible, so that I have 3 completed groups in total, and then the last 4 messages should not be part of any group (as their group has not completed yet). The results should look something like this:

Group 1: profiles total: 2, completed
Group 2: profiles total: 2, completed
Group 3: profiles total: 6, completed
Group 4: profiles total: 4, -
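A sketch of one way to build these groups with streamstats, assuming each line above is a separate event and that the index/sourcetype names are placeholders; the idea is to count the "Will stop" markers seen strictly before each event (which assigns a group number, with the marker itself closing its own group) and then flag groups that contain a marker as completed:

index=my_index sourcetype=my_logs ("Adding profile with id" OR "Will stop adding profiles")
| sort 0 _time
| eval is_add = if(like(_raw, "%Adding profile with id%"), 1, 0)
| eval is_end = if(like(_raw, "%Will stop adding profiles%"), 1, 0)
| streamstats sum(is_end) as ends_before current=f
| eval group = ends_before + 1
| stats sum(is_add) as profiles_total, max(is_end) as has_end by group
| eval status = if(has_end = 1, "completed", "-")
| table group profiles_total status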
We use Splunk On-Call, and instead of exporting the on-call schedule to a calendar we want to export it to a CSV for our scheduling system. Is that possible? If so, how?
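One route worth checking is the Splunk On-Call (VictorOps) public REST API, which can return the schedule as JSON that you can then flatten to CSV; the endpoint path below is from memory and should be verified against the current API docs, and the team slug and keys are placeholders:

# Fetch a team's on-call schedule as JSON (verify the exact endpoint in the API docs)
curl -s "https://api.victorops.com/api-public/v2/team/<team-slug>/oncall/schedule" \
  -H "X-VO-Api-Id: <api-id>" \
  -H "X-VO-Api-Key: <api-key>" \
  -o schedule.json
# Convert the JSON to CSV with your tool of choice (jq, Python, etc.)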
Hi guys,

Looking for help framing a query for the following scenario:

index=index "designated field"

Events show that there are multiple values for the field (these are log message types):
Type1
Type2
Type3
...
TypeN

I want to enumerate all of the fields that are associated with each designated_field.TypeN (i.e. each log message type has sub-fields associated with it).

So for Type1:
Field1_Type1
Field2_Type1
Field3_Type1

For Type2:
Field1_Type2
Field2_Type2

etc.

So I am imagining my query goes like this:

index=index1 designated_field
| <enumerate each of the values in designated_field>
| <pull out the field names for each of the values that were enumerated>
| <form a table with a column listing the values and then a second column showing all of the field names associated with each value>
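A sketch of one way to do the enumeration with foreach, assuming designated_field holds the message type and the events carry their sub-fields as ordinary extracted fields; the index name is taken from your pseudo-query:

index=index1 designated_field=*
| foreach * [ eval field_list = mvappend(field_list, if(isnotnull('<<FIELD>>'), "<<FIELD>>", null())) ]
| stats values(field_list) as fields_present by designated_field

This yields one row per designated_field value with a multivalue column of the field names seen in those events.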
Hi All,

We use Splunk DB Connect 3.10.0 for Oracle connections. Strangely, one connection stopped working (after a password reset). It is giving the error "ORA-28000: The account is locked." We have tried removing the connection/identity and creating one from scratch, but to no avail. The database team says that the account associated with the identity is fine and that they are not seeing any bad login attempts either; the account worked fine when I tried it via SQL explorer on my laptop.

Below is the log from splunk_app_db_connect_server.log:

[dw-37 - POST /api/connections/status] ERROR c.s.d.s.a.s.d.impl.DatabaseMetadataServiceImpl - action=error_in_validating_connection connection=ConnectionConf{connectionType='oracle', defaultDatabase='xxxxxx', host='yyyyyyyyy', jdbcUrlFormat='jdbc:oracle:thin:@//<host>:<port>/<database>', jdbcUrlSSLFormat='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<host>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<database>)))', jdbcUseSSL=false, port=1521, serviceClass='com.splunk.dbx2.OracleJDBC', jdbcDriverClass='oracle.jdbc.OracleDriver', testQuery='null', isolationLevel='null', readonly=true, identityName='XYZ_PROD', identity=DotConfBase{title='XYZ_PROD', disabled=false}, useConnectionPool=false, fetchSize=100, maxConnLifetimeMillis=1800000, maxWaitMillis=30000, minIdle=null, maxTotalConn=8, idleTimeout=null, timezone=null, connectionProperties='{}'}
java.sql.SQLException: ORA-28000: The account is locked.

Any help would be appreciated.

Thanks,
Neerav
Hello everyone,

My event data looks like this:

{\"status\":1,\"httpStatus\":200,\"event\":\"getBooks\"}

My goal is to extract httpStatus as a field so I can filter events by their codes (e.g. 200, 400, ...).

I learned that we need to escape backslashes and double quotes, but the command below didn't work:

| rex "httpStatus\\\":(?<http_status>\d+)"

What did I do incorrectly here?
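For what it's worth, the raw event contains a literal backslash before each quote, so the regex also has to account for that extra character. An easy way to sidestep the escaping layers is to match the non-digit characters between the field name and the number; a sketch, assuming the events look exactly like the sample:

| rex "httpStatus\D+(?<http_status>\d+)"
| search http_status=200

If you prefer to match the backslash-and-quote explicitly, remember the backslash has to be escaped once for the SPL string and once for the regex engine, which is why a single literal backslash in the data usually ends up written as four backslashes inside the rex quotes.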
Hi. In our Splunk Cloud environment we are using Splunk DDAA to archive data for 1 year (the DDAA retention period). But when I try to restore data older than 1 year, it still restores. Can someone suggest why it restores more than a year? Also, how much storage is required for DDAA if I'm ingesting 50 GB per day and the DDAA retention period is 1 year? Thanks!
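On the sizing question, a rough back-of-the-envelope figure based only on the numbers given: 50 GB/day of ingest over a 1-year retention period is 50 x 365 = 18,250 GB, roughly 17.8 TB of ingested data covered by the archive. The actual DDAA footprint will differ because archived buckets are compressed, so treat this as an upper-bound starting point and confirm the entitlement math with Splunk.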
Hi. During maintenance, or outside business/operating hours, you may want to stop or pause the checks running on this application, as it's pointless to test availability during these periods; it consumes checks/resources and it impacts alerting and reporting. -> https://dev.splunk.com/observability/search/?q=synthetics Doing this via the API might also be easier and faster than from the Synthetics advanced setup interface. Would you share your experience on this and the pros/cons you've seen? Thanks!
Hi All, I am using the query below to get the common results on the basis of correlation_id, but it is very slow and I need to optimize it to get the proper results:

index=kong_fincrimekyc_prod
| rename request.headers.x-int-clientapplication as "client" correlation-id as "correlation_id"
| table Error_Reason, correlation_id, client, upstream_uri
| where isnotnull(client)
| join type=outer correlation_id
    [ search index=fincrimekyc_prod source="prod-ms-vix-adapter" sourcetype=kyc_app_logs "com.nab.ms.vix.adapter.exception.VixExceptionHandler" "ERROR" OR "Exception"
    | rex "Caused by(?<Error_Reason>.*?)\\\n"
    | rex "correlation_id=\\\"(?<correlation_id>.*?)\\\"\,"
    | table Error_Reason, correlation_id ]
| stats values(Error_Reason) as "Error_Reason" values(client) values(correlation_id) by upstream_uri
| where isnotnull(Error_Reason)

Please help with this. Thanks.
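One common way to speed this kind of query up is to drop the join entirely: pull both sources in a single search, merge them with a stats by correlation_id, then roll up to upstream_uri. A sketch, assuming the field and source names are exactly as in the query above:

(index=kong_fincrimekyc_prod)
OR (index=fincrimekyc_prod source="prod-ms-vix-adapter" sourcetype=kyc_app_logs "com.nab.ms.vix.adapter.exception.VixExceptionHandler" ("ERROR" OR "Exception"))
| rename request.headers.x-int-clientapplication as client correlation-id as correlation_id
| rex "Caused by(?<Error_Reason>.*?)\\\n"
| rex "correlation_id=\\\"(?<correlation_id>.*?)\\\"\,"
| stats values(Error_Reason) as Error_Reason values(client) as client values(upstream_uri) as upstream_uri by correlation_id
| where isnotnull(Error_Reason) AND isnotnull(client)
| stats values(Error_Reason) as Error_Reason values(client) as client values(correlation_id) as correlation_id by upstream_uri

The first stats joins the two sides on correlation_id without the subsearch and row limits that come with join; the second reproduces the original breakdown by upstream_uri.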