Hello. I'm setting up a new Splunk Enterprise environment - just a single indexer with forwarders. There are two volumes on the server: one is SSD for hot/warm buckets, and the other is HDD for cold buckets. I'm trying to configure Splunk such that an index ("test-index") will only consume, say, 10 MB of the SSD volume. After it hits that threshold, the oldest hot/warm bucket should roll over to the slower HDD volume. I've run various tests, but when the index's 10 MB SSD threshold is reached, all of the buckets roll over to cold storage, leaving the SSD empty. Here is how indexes.conf is set now:

[volume:hot_buckets]
path = /srv/ssd
maxVolumeDataSizeMB = 430000

[volume:cold_buckets]
path = /srv/hdd
maxVolumeDataSizeMB = 11000000

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
homePath.maxDataSizeMB = 10

When the 10 MB threshold is reached, why is everything in hot/warm rolling over to cold storage? I had expected 10 MB of data to remain in hot/warm, with only the older buckets rolling over to cold. I've poked around and found a few other articles related to maxDataSizeMB, but those questions don't align with what I'm experiencing. Any guidance is appreciated. Thank you!
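One likely explanation, offered as a hedged guess rather than a confirmed diagnosis: homePath.maxDataSizeMB is enforced by rolling the oldest warm buckets to cold until the home path is back under the cap, and with the default per-bucket size (maxDataSize = auto, roughly 750 MB) the 10 MB home path typically holds only a single bucket, so rolling that one bucket empties hot/warm entirely. For a small-scale test, shrinking the per-bucket size so several buckets fit under the 10 MB cap keeps some data on the SSD; the values below are purely illustrative:

```
[test-index]
homePath   = volume:hot_buckets/test-index/db
coldPath   = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
homePath.maxDataSizeMB = 10
# Hypothetical tuning for a tiny test index: cap each bucket at ~1 MB so the
# 10 MB home path can hold several buckets and only the oldest roll to cold.
maxDataSize = 1
maxHotBuckets = 3
```

In a production setup you would normally do the opposite and raise homePath.maxDataSizeMB well above the bucket size rather than shrink the buckets.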
In October, the Splunk Threat Research Team had one release of new security content via the Enterprise Security Content Update (ESCU) app (v4.42.0). With this release, there are 10 new analytics, 15 updated analytics, and 1 updated analytic story now available in Splunk Enterprise Security via the ESCU application update process.

Content highlights include: The CISA AA24-241A analytic story was updated with detections tailored to identify malicious usage of PowerShell Web Access in Windows environments. The new detections focus on monitoring PowerShell Web Access activity through the IIS application pool and web access logs, providing enhanced visibility into suspicious or unauthorized access. The Splunk Threat Research Team also updated the security content repository on research.splunk.com to better help security teams find the most relevant content for their organizations, understand how individual detections operate, and stay up-to-date on the latest releases. For more details, check out this blog: Fueling the SOC of the Future with Built-in Threat Research and Detections in Splunk Enterprise Security.

New Analytics (10)
Splunk Disable KVStore via CSRF Enabling Maintenance Mode
Splunk Image File Disclosure via PDF Export in Classic Dashboard
Splunk Low-Priv Search as nobody SplunkDeploymentServerConfig App
Splunk Persistent XSS via Props Conf
Splunk Persistent XSS via Scheduled Views
Splunk RCE Through Arbitrary File Write to Windows System Root
Splunk SG Information Disclosure for Low Privs User
Splunk Sensitive Information Disclosure in DEBUG Logging Channels
Windows IIS Server PSWA Console Access
Windows Identify PowerShell Web Access IIS Pool

Updated Analytics (15)
Create Remote Thread into LSASS
Detect Regsvcs with Network Connection
Linux Auditd Change File Owner To Root
Possible Lateral Movement PowerShell Spawn
Suspicious Process DNS Query Known Abuse Web Services
Windows AdFind Exe
Windows DISM Install PowerShell Web Access
Windows Enable PowerShell Web Access
Windows Impair Defenses Disable AV AutoStart via Registry
Windows Modify Registry Utilize ProgIDs
Windows Modify Registry ValleyRAT C2 Config
Windows Modify Registry ValleyRat PWN Reg Entry
Windows Privileged Group Modification
Windows Scheduled Task DLL Module Loaded
Windows Scheduled Tasks for CompMgmtLauncher or Eventvwr

Updated Analytic Stories (1)
CISA AA24-241A

The team also published the following 4 blogs:
ValleyRAT Insights: Tactics, Techniques, and Detection Methods
Introducing Splunk Attack Range v3.1
PowerShell Web Access: Your Network's Backdoor in Plain Sight
My CUPS Runneth Over (with CVEs)

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
I have an SPL query in which I'm trying to collect all domains from raw logs, but my regex is capturing only one domain per event. Some events have one URL and some have 20 or more. How do I capture all domains? Please advise.

SPL:

| rex field=_raw "(?<domain>\w+\.\w+)\/"
| rex field=MessageURLs "\b(?<domain2>(?:http?://|www\.)(?:[0-9a-z-]+\.)+[a-z]{2,63})/?"
| fillnull value=n/a
| stats count by domain domain2 MessageURLs _raw
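A hedged sketch of one way to do this: by default rex keeps only the first match, but max_match=0 captures every match into a multivalue field, which can then be combined, expanded, and counted. The field names mirror the ones in the question, and the protocol part is written as https?:// so it matches both http and https:

```
| rex field=_raw max_match=0 "(?<domain>\w+\.\w+)\/"
| rex field=MessageURLs max_match=0 "\b(?<domain2>(?:https?://|www\.)(?:[0-9a-z-]+\.)+[a-z]{2,63})/?"
| eval all_domains=mvappend(domain, domain2)
| mvexpand all_domains
| stats count by all_domains
```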
Hi, I want to search and save the result as a table from two log statements. One log statement uses a regex to extract "ORDERS", and the other uses a regex to extract "ORDERS, UNIQUEID". My requirement is to combine the two log statements on "ORDERS" and pull ORDERS and UNIQUEID into a table. I am using join to combine the two log statements on "ORDERS", but my Splunk query is not returning any results.
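A hedged sketch of the usual join-free pattern for this: search both log statements at once, extract the fields from each, and let stats bring them together on ORDERS. The index name and rex patterns below are placeholders for whatever the real events look like:

```
index=my_app ("log statement one text" OR "log statement two text")
| rex "ORDERS=(?<ORDERS>\S+)"
| rex "UNIQUEID=(?<UNIQUEID>\S+)"
| stats values(UNIQUEID) as UNIQUEID by ORDERS
| table ORDERS UNIQUEID
```

join in SPL silently drops results when its subsearch hits row or time limits, which is a common reason for empty output; stats avoids those limits.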
In the Splunk app, the exception message column contains a multi-line message. However, when the same query is applied to the table element in Splunk Dashboard Studio, the newline isn't honored and the message is rendered as one continuous line.

Below is the Splunk app result.

Below is the table shown in the Studio.

Below is the Splunk query.

index="eqt-e2e"
| spath suite_build_name
| search suite_build_name="PAAS-InstantInk-Stage-Regression-Smooth-Transition"
| spath unit_test_name_failed{} output=unit_test_name_failed
| mvexpand unit_test_name_failed
| spath input=unit_test_name_failed
| where message!="Test was skipped"
| spath suite_build_number
| search suite_build_number="*"
| where (if("*"="*", 1=1, like(author, "%*%")))
| where (if("*"="*", 1=1, like(message, "%*%")))
| spath suite_build_start_time
| sort - suite_build_start_time
| eval suite_build_time = strftime(strptime(suite_build_start_time, "%Y-%m-%d %H:%M:%S"), "%I:%M %p")
| table suite_build_name, suite_build_number, suite_build_time, author, test_rail_name, message
| rename suite_build_name AS "Pipeline Name", suite_build_number AS "Pipeline No.", suite_build_time AS "Pipeline StartTime (UTC)", author AS "Test Author", test_rail_name AS "Test Name", message AS "Exception Message"

@ITWhisperer
Hello,

From my client I created a service account in the Google Cloud Platform:

{
  "type": "service_account",
  "project_id": "<MY SPLUNK PROJECT>",
  "private_key_id": "<MY PK ID>",
  "private_key": "-----BEGIN PRIVATE KEY-----\<MY PRIVATE KEY>\n-----END PRIVATE KEY-----\n",
  "client_email": "<splunk>@<splunk>-<blabla>",
  "client_id": "<MY CLIENT ID>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<MY-SPLUNK-URL>"
}

The service account was recognized by the add-on, so I assumed the connection was established correctly. Later, when I created the first input to collect the logs (a Pub/Sub) and searched for the "Projects" connected to this service account (see the screenshot), it returned the error "External handler failed with code '1' and output ''. see splunkd.log for stderr". The stderr in splunkd.log gives no useful information (just a generic error), so I am blocked at the moment. I also downloaded the code from the Google Cloud Platform Add-on, but it is not an easy debugging process, and I cannot find what query the add-on actually performs when clicking on "Projects".

Does anyone have an idea about this error?

Thanks
Hi all,

We want to get F5 WAF logs into Splunk. The WAF team is sending logs to our syslog server. A UF is installed on the syslog server and it will forward the data to our indexer. Please point me to any detailed documentation or steps to ingest the data successfully, plus any troubleshooting tips if needed. To be honest, I don't really know what syslog means for my setup... I am very new to Splunk and learning. Apologies if this is a basic question, but I seriously want to learn.
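A hedged sketch of the typical pattern, assuming the syslog daemon (rsyslog or syslog-ng) writes the F5 WAF events to files on disk and the universal forwarder simply monitors those files. The path, index name, and sourcetype below are placeholders you would replace with your own values:

```
# inputs.conf on the universal forwarder installed on the syslog server
[monitor:///var/log/f5waf/*.log]
index = f5_waf
# sourcetype is illustrative - check the Splunk Add-on for F5 BIG-IP for the
# sourcetypes it expects if you plan to use that add-on's field extractions
sourcetype = f5:bigip:asm:syslog
disabled = 0
```

The index has to exist on the indexer before data arrives, and outputs.conf on the UF must already point at the indexer for forwarding to work.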
Hello Splunkers!!

I want _time to match the timestamp in the events' EVENTTS field while ingesting into Splunk. However, even after applying the props.conf attributes below, the results still do not match after ingestion. Please advise me on the proper settings and help me fix this.

Raw events:

2024-11-07 18:45:00.035, ID="51706", IDEVENT="313032807", EVENTTS="2024-11-07 18:29:43.175", INSERTTS="2024-11-07 18:42:05.819", SOURCE="Shuttle.DiagnosticErrorInfoLogList.28722.csv", LOCATIONOFFSET="0", LOGTIME="2024-11-07 18:29:43.175", BLOCK="2", SECTION="A9.18", SIDE="-", LOCATIONREF="10918", ALARMID="20201", RECOVERABLE="False", SHUTTLEID="Shuttle_069", ALARM="20201", LOCATIONDIR="Front"

Existing props setting:

CHARSET = UTF-8
DATETIME_CONFIG =
LINE_BREAKER = [0-9]\-[0-9]+\-[0-9]+\s[0-9]+:[0-9]+:[0-9]+.\d+
NO_BINARY_CHECK = true
category = Custom
TIME_PREFIX = EVENTTS="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC

In the screenshot below we can still see that _time is not extracted to match the timestamp in the "EVENTTS" field.
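A hedged sketch of how the props stanza could look. Two things stand out as likely culprits, offered as guesses rather than a confirmed diagnosis: LINE_BREAKER requires a capturing group for the text that is discarded between events, and as written it also swallows the leading timestamp; and timestamp settings only take effect at parse time, so they must live on the indexer or heavy forwarder that first parses the data, and the data must be re-ingested before the change is visible. The sourcetype name below is a placeholder:

```
[shuttle:diagnostic:csv]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
# Break before each leading "YYYY-MM-DD HH:MM:SS.mmm"; the capture group holds
# only the newline(s), so the timestamp itself stays with the event.
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d+
TIME_PREFIX = EVENTTS="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC
```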
I am trying to break down a URL to extract the region and chart the use of specific URLs over time, but I just get a NULL count for everything. How do I display the counts as separate values?

[query]
| eval region=case(url like "%region1%","Region 1",url like "%region2%","Region 2")
| timechart span=1h count by region
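A hedged sketch of what usually fixes this: if the url field is not actually extracted at search time (or the patterns never match), case() returns null for every event and timechart lumps everything under NULL. Verifying the field exists and adding catch-all branches makes the behaviour visible:

```
[query]
| eval region=case(match(url, "region1"), "Region 1",
                   match(url, "region2"), "Region 2",
                   isnull(url), "no url field",
                   true(), "Other")
| timechart span=1h count by region
```

If everything lands in "no url field", the field needs to be extracted first (for example with rex) before case() can classify it.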
See SPL-248479 in the release notes. If you are using a persistent queue and see the following errors in splunkd.log:

ERROR TcpInputProc - Encountered Streaming S2S error
1. "Cannot register new_channel"
2. "Invalid payload_size"
3. "Too many bytes_used"
4. "Message rejected. Received unexpected message of size"
5. "not a valid combined field name/value type for data received"

or other S2S streaming errors, you should upgrade your HF/IHF/IUF/IDX instance (if it is using a persistent queue) to one of the following patches: 9.4.0 / 9.3.2 / 9.2.4 / 9.1.7 and above. This patch also fixes all the known PQ-related crashes and other PQ issues.
This is similar to a question I asked earlier today that was quickly answered; however, I'm not sure I can apply that solution here because of the transpose - I'm not sure how to reference the data correctly for that. We have data with 10-15 fields in it and we are doing a transpose like the one below. What we are looking to accomplish is to display only the rows where the values are the same, or alternatively where they are different.

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1

column    sys1    sys2
field2       a            b
field3       10         10
field4       a           a
field5       10         20
field6       c           c
field7       20         20
field8       a           d
field9      10         10
field10    20        10
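A hedged sketch, assuming the transposed column headers come out as sys1 and sys2 exactly as in the sample output: a where clause after the transpose keeps only the matching rows, and flipping the operator keeps only the differing ones.

```
index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1
| where 'sys1' == 'sys2'
```

Using | where 'sys1' != 'sys2' instead returns only the rows that differ; the single quotes keep the comparison working even if the real system names contain characters like dashes.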
Hello,

We have two clustered Splunk platforms. Several sources are sent to both platforms (directly to the clustered indexers) with index app-idx1; on the 2nd platform we use props.conf/transforms.conf to rewrite them to a different target index, application_idx2. For an unknown reason, a few sources are falling through to lastchanceindex.

props.conf
[source::/path/to/app_json.log]
TRANSFORMS-app-idx1 = set_idx1_index

transforms.conf
[set_idx1_index]
SOURCE_KEY = _MetaData:Index
REGEX = app-idx1
DEST_KEY = _MetaData:Index
FORMAT = application_idx2

Thanks for your help.
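A hedged guess at the cause, not a confirmed diagnosis: the props.conf stanza matches on the literal source path, so any source whose path differs from /path/to/app_json.log never gets the transform, keeps index app-idx1, and lands in lastchanceindex if that index doesn't exist on the 2nd platform. A wider source stanza (or a sourcetype-based stanza) is one way to cover the stragglers; the pattern below is illustrative:

```
# props.conf - a wildcard source stanza so every log under /path/to/
# gets the index rewrite, not just app_json.log
[source::/path/to/*.log]
TRANSFORMS-app-idx1 = set_idx1_index
```

Checking on the indexers which stanza each failing source actually matches (for example with splunk btool props list --debug) is a quick way to confirm or rule this out.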
I've tried to register for the SplunkWork+ training for veterans and after I verify with ID.me, I receive a message saying that my account is being configured, but then receive a "504 Gateway Timeout". Any ideas?   Thanks!
Register here! This thread is for the Community Office Hours session on Splunk Threat Research Team: Security Content on Wed, Dec 11, 2024 at 1pm PT / 4pm ET.

This is an opportunity to directly ask members of the Splunk Threat Research Team your questions, such as...

What are the latest security content updates from the Splunk Threat Research Team?
What are the best practices for accessing, implementing, and using the team's security content?
What tips and tricks can help leverage Splunk Attack Range, Contentctl, and other resources developed by the Splunk Threat Research Team?
Any other questions about the team's content and resources!

Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Hello,

We identify a failed request by gathering data from 3 different logs. I need to group by userSesnId, and if these specific logs appear in my list, that defines a certain failure. I would like to count each failure using these logs. I would greatly appreciate your help with writing this search query. I hope this makes sense... Thank you.

I would like to use the information from these logs, grouped by userSesnId:

Log #1:
msgDtlTxt: [Qxxx] - the quote is declined.
msgTxt: quote creation failed.
polNbr: Qxxx

Log #2:
httpStatusCd: 400

Log #3:
msgTxt: Request.

They all share the same userSesnId
userSesnId: 10e30ad92e844d

So my results should look something like this:

polNbr            msgDtlTxt                         msgTxt                              httpStatusCd             count
Qxxx                Validation: UWing           quote creation failed     400                             1
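A hedged sketch of one way to approach this, assuming all three logs live in the same index (the index name and base filters below are placeholders): gather the events, group them by userSesnId with stats values(), keep only the sessions where all three signals are present, and then count per quote number.

```
index=app_logs ("quote creation failed" OR httpStatusCd=400 OR msgTxt="Request.")
| stats values(polNbr) as polNbr values(msgDtlTxt) as msgDtlTxt
        values(msgTxt) as msgTxt values(httpStatusCd) as httpStatusCd
        by userSesnId
| search msgTxt="quote creation failed*" msgTxt="Request*" httpStatusCd=400
| stats count by polNbr
```

Additional columns such as msgDtlTxt or httpStatusCd can be carried into the final table with another values() if they are needed in the output.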
If I execute the query below for a selected time range of about 20 hours, it takes a long time and scans around 272,000 events. How can I simplify this query so the result comes back in 15 to 20 seconds?

index=asvservices authenticateByRedirectFinish (*)
| join request_correlation_id
    [ search index=asvservices stepup_validate ("isMatchFound\\\":true")
      | spath "policy_metadata_policy_name"
      | search "policy_metadata_policy_name" = stepup_validate
      | fields "request_correlation_id" ]
| spath "metadata_endpoint_service_name"
| spath "protocol_response_detail"
| search "metadata_endpoint_service_name"=authenticateByRedirectFinish
| rename "protocol_response_detail" as response
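A hedged sketch of the usual way to speed this up: drop the join (whose subsearch limits and second pass over the data are what make it slow) and instead search both event types in one pass, flag them, and correlate with stats on request_correlation_id. The field names are taken from the original query; the flag logic is an assumption about what the join was meant to express.

```
index=asvservices (authenticateByRedirectFinish OR (stepup_validate "isMatchFound\\\":true"))
| spath "metadata_endpoint_service_name"
| spath "policy_metadata_policy_name"
| spath "protocol_response_detail"
| eval is_finish=if(metadata_endpoint_service_name="authenticateByRedirectFinish", 1, 0)
| eval is_validate=if(policy_metadata_policy_name="stepup_validate", 1, 0)
| stats max(is_finish) as has_finish max(is_validate) as has_validate
        values(protocol_response_detail) as response
        by request_correlation_id
| where has_finish=1 AND has_validate=1
```

If this is still too slow, narrowing the base search with indexed terms or moving the correlation into a summary index or an accelerated data model are the usual next steps.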
A few servers are hosted in a private VPC that is not connected to the organisation's IT network. How can we onboard those Linux hosts?
In a previous post, we explored monitoring PostgreSQL and general best practices around which metrics to collect and monitor for database performance, health, and reliability. In this post, we'll look at how to monitor MariaDB (and MySQL) so we can keep the data stores that back our business applications performant, healthy, and resilient.

How to get the metrics

The first step to get metrics from MariaDB or MySQL (for the rest of this post, I'll just mention MariaDB, but the same approach and receiver works for both) to Splunk Observability Cloud is to instrument the backend service(s) that are connected to your MariaDB(s). If you're working with the Splunk Distribution of the OpenTelemetry Collector, you can follow the guided install docs. Here I'm using Docker Compose to set up my application, MariaDB service, and the Splunk Distribution of the OpenTelemetry Collector.

Next, we'll need to configure the OpenTelemetry Collector with a receiver to collect telemetry data from our MariaDB and an exporter to export data to our backend observability platform. If you already have an OpenTelemetry Collector configuration file, you can add the following configurations to that file. Since I set up the Collector using Docker Compose, I needed to create an empty otel-collector-config.yaml file. The mysql receiver is the MariaDB-compatible Collector receiver, so we'll add it under the receivers block along with our Splunk Observability Cloud exporter under the exporters block. Here's what our complete configuration looks like (a minimal sketch is included further below). As always, don't forget to add the receivers and exporters to the service pipelines.

Done! We can now build, start, or restart our service and see our database metrics flow into our backend observability platform.

Visualizing the Data in Splunk Observability Cloud

With our installation and configuration of the OpenTelemetry Collector complete, we can now visualize our data in our backend observability platform, Splunk Observability Cloud.

From within Application Performance Monitoring (APM), we can view all of our application services and get a comprehensive performance overview - including performance details around our MariaDB. We can explore our Service Map to visualize our MariaDB instance and how it fits in with the other services in our application, and we can select our MariaDB instance to get deeper insight into performance metrics, requests, and errors.

If we scroll down in the Breakdown dropdown results on the right side of the screen, we can even get quick visibility into Database Query Performance, showing us how specific queries perform and which queries are being executed. Clicking into Database Query Performance takes us to a view that can be sorted by total query response time, queries in the 90th percentile of latency, or total requests, so we can quickly isolate queries that might be impacting our services and our users.

We can select specific queries from our Top Queries for more query detail, like the full query, requests & errors, latency, and query tags. We can dig into specific traces related to high-latency database requests, letting us see how specific users were affected by database performance, right down to the span performance and the db.statement for the trace. We can proactively use this information to further optimize our queries and improve user experience.
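Circling back to the Collector setup described above, here is a minimal sketch of what such an otel-collector-config.yaml could look like, assuming the contrib mysql receiver and the signalfx (Splunk Observability Cloud) exporter; the endpoint, credentials, realm, and token are placeholders:

```yaml
receivers:
  mysql:
    endpoint: mariadb:3306        # hypothetical service name from docker-compose
    username: otel                # placeholder database user
    password: changeme            # placeholder password
    collection_interval: 10s

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"   # placeholder token
    realm: us1                               # placeholder realm

service:
  pipelines:
    metrics:
      receivers: [mysql]
      exporters: [signalfx]
```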
You can also see that the database activity shows up in the overall trace waterfall, letting you get a full picture of how all components of the stack were involved in this transaction.

But how can this help us in an incident?

This is all helpful information and can guide us on our journey to improve query performance and efficiency. But when our database connections fail and our query errors spike, that's when this data becomes critical to keeping our applications up and running.

When everything is running smoothly, our database over in APM looks healthy. But when things start to fail, our Service Map highlights issues in red, and we can dive into traces related to specific error spikes. The stack trace helps us get to the root cause of errors; in this case, we had an improperly specified connection string, and we can even see the exact line where the exception was thrown. With quick, at-a-glance insight into service and database issues, we easily jumped into the code, resolved our database connection issue, and got our service back up and running so our users could carry on enjoying our application.

Wrap Up

Monitoring the data stores that back our applications is critical for improved performance, resiliency, and user experience. We can easily configure the OpenTelemetry Collector to receive MariaDB telemetry data and export this data to a backend observability platform for visibility and proactive detection of anomalies that could impact end users. Want to try out exporting your MariaDB data to a backend observability platform? Try Splunk Observability Cloud free for 14 days!

Resources

Database Monitoring: Basics & Introduction
OpenTelemetry Collector
Configuring Receivers
Monitor Database Query Performance
Hello,

I get a "Failed processing http input" error when trying to collect the following JSON event with indexed fields:

{"index" : "test",  "sourcetype", "test", "event":"This is a test", "fields" : { "name" : "test" , "values" : {}  }}

The error is: "Error in handling indexed fields"

Could anyone clarify the reason for the error? Is it that the "fields" values cannot be empty? I can't prevent that on the source.

Best regards,
David
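A hedged guess at the cause, not a confirmed diagnosis: the values inside the "fields" object of an HEC event payload are expected to be strings or arrays of strings, so a nested object such as "values" : {} is rejected as an indexed field. If the nested structure can't be removed at the source, flattening or stringifying it before sending is one workaround; a sketch of a payload along those lines (field names are just illustrative):

```json
{
  "index": "test",
  "sourcetype": "test",
  "event": "This is a test",
  "fields": {
    "name": "test",
    "values": ""
  }
}
```

The pasted payload also shows "sourcetype", "test" with a comma rather than a colon; if that isn't just a transcription artifact, it would make the JSON itself invalid and could produce its own error.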
After looking at some examples online, I was able to come up with the query below, which can display one or more columns of data based on the selection of "sysname". What we would like to do with this is optionally select just two sysnames and show only the rows where the values do not match.

index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name

The data format is as below, and there are a couple hundred value_names for each sysname, with formats varying from integer values to long strings:

sysname, value_name, value_info

The above query displays the data something like this:

value_name            SYS1                     SYS6
name1                       X                             Y
name2                       A                             A
name3                       B                             C
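A hedged sketch, assuming exactly two sysnames are selected and their names end up as the column headers (SYS1 and SYS6 in the example): a where clause at the end keeps only the rows whose values differ.

```
index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name
| where 'SYS1' != 'SYS6'
```

If the two system names need to stay user-selectable in a dashboard, the same comparison can be driven by tokens, for example | where '$sys_a$' != '$sys_b$'.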