All Topics


Hi, I want to search two log statements and save the result as a table. One log statement uses a regex to extract ORDERS, and the other uses a regex to extract ORDERS and UNIQUEID. My requirement is to combine the two log statements on ORDERS and pull ORDERS and UNIQUEID into a table. I am using join to combine the two log statements on ORDERS, but my Splunk query is not returning any results.
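A minimal sketch of a join-free way to combine the two extractions, assuming both log statements live in the same index; the index name and the rex patterns below are placeholders for your actual filters and extractions:

index=your_index ("log statement one filter" OR "log statement two filter")
| rex "ORDERS=(?<ORDERS>\S+)"
| rex "UNIQUEID=(?<UNIQUEID>\S+)"
| stats values(UNIQUEID) AS UNIQUEID by ORDERS
| table ORDERS UNIQUEID

The stats command correlates the two event types on the shared ORDERS value without the subsearch limits that join introduces, which is a common reason join-based searches return nothing.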
In the Splunk app, the exception message column contains a multi-line message. However, when the same query is applied to a table in Splunk Dashboard Studio, the newlines are not preserved and the message is displayed as one continuous string. Below is the Splunk app result. Below is the table shown in Studio. Below is the Splunk query.

index="eqt-e2e"
| spath suite_build_name
| search suite_build_name="PAAS-InstantInk-Stage-Regression-Smooth-Transition"
| spath unit_test_name_failed{} output=unit_test_name_failed
| mvexpand unit_test_name_failed
| spath input=unit_test_name_failed
| where message!="Test was skipped"
| spath suite_build_number
| search suite_build_number="*"
| where (if("*"="*", 1=1, like(author, "%*%")))
| where (if("*"="*", 1=1, like(message, "%*%")))
| spath suite_build_start_time
| sort - suite_build_start_time
| eval suite_build_time = strftime(strptime(suite_build_start_time, "%Y-%m-%d %H:%M:%S"), "%I:%M %p")
| table suite_build_name, suite_build_number, suite_build_time, author, test_rail_name, message
| rename suite_build_name AS "Pipeline Name", suite_build_number AS "Pipeline No.", suite_build_time AS "Pipline StartTime (UTC)", author AS "Test Author", test_rail_name AS "Test Name", message AS "Exception Message"

@ITWhisperer
Hello, from my client I created a service account in the Google Cloud Platform:

{ "type": "service_account", "project_id": "<MY SPLUNK PROJECT>", "private_key_id": "<MY PK ID>", "private_key": "-----BEGIN PRIVATE KEY-----\<MY PRIVATE KEY>\n-----END PRIVATE KEY-----\n", "client_email": "<splunk>@<splunk>-<blabla>, "client_id": "<MY CLIENT ID>", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://accounts.google.com/o/oauth2/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<MY-SPLUNK-URL>" }

The service account was recognized by the add-on, so I assumed the connection was established correctly. Later, when I created the first input to collect the logs (a Pub/Sub) and searched for the "Projects" connected to this service account (shown in the photo), it returned the error "External handler failed with code '1' and output ''. see splunkd.log for stderr". The stderr in splunkd.log gives no useful information (just a generic error), so I am blocked at the moment. I also downloaded the code of the Google Cloud Platform Add-on, but it is not an easy debugging process; I cannot find what query the add-on actually performs when clicking on "Projects". Does anyone have an idea about this error? Thanks
Hi all, we want to get F5 WAF logs into Splunk. The WAF team is sending logs to our syslog server. A universal forwarder (UF) is installed on the syslog server and forwards the data to our indexer. Please point me to any detailed documentation or the steps to follow to ingest the data successfully, plus any troubleshooting tips if needed. I don't really know much about syslog yet; I am very new to Splunk and still learning. Apologies if this is a basic question, but I seriously want to learn.
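For the UF side, the usual pattern is to have the syslog daemon write the F5 events to files and point a monitor input at them. A minimal sketch, assuming hypothetical file paths, index name, sourcetype, and indexer address (check the Splunk Add-on for F5 BIG-IP for the sourcetypes it actually expects):

# inputs.conf on the universal forwarder
[monitor:///var/log/f5/*.log]
index = f5_waf
sourcetype = f5:bigip:syslog
disabled = false

# outputs.conf on the universal forwarder
[tcpout:primary_indexers]
server = indexer.example.com:9997

The index named here must already exist on the indexer, and the syslog daemon (rsyslog or syslog-ng) must be configured to write the incoming F5 messages into the monitored directory.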
Hello Splunkers!! I want _time to match the timestamp in the events' EVENTTS field at ingestion time. However, even after applying the props.conf attributes below, the results still do not match after ingestion. Please advise me on the proper settings and help me fix this.

Raw event:
2024-11-07 18:45:00.035, ID="51706", IDEVENT="313032807", EVENTTS="2024-11-07 18:29:43.175", INSERTTS="2024-11-07 18:42:05.819", SOURCE="Shuttle.DiagnosticErrorInfoLogList.28722.csv", LOCATIONOFFSET="0", LOGTIME="2024-11-07 18:29:43.175", BLOCK="2", SECTION="A9.18", SIDE="-", LOCATIONREF="10918", ALARMID="20201", RECOVERABLE="False", SHUTTLEID="Shuttle_069", ALARM="20201", LOCATIONDIR="Front"

Existing props settings:
CHARSET = UTF-8
DATETIME_CONFIG =
LINE_BREAKER = [0-9]\-[0-9]+\-[0-9]+\s[0-9]+:[0-9]+:[0-9]+.\d+
NO_BINARY_CHECK = true
category = Custom
TIME_PREFIX = EVENTTS="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC

In the screenshot below you can still see that _time is not extracted to match the timestamp in the EVENTTS field.
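Two things are worth checking here: the empty DATETIME_CONFIG line (either remove it or leave the default), and the LINE_BREAKER, which must contain a capturing group that matches the delimiter between events. A sketch of a stanza that addresses both, where the stanza name is a placeholder for your sourcetype; it assumes these props are applied on the first parsing tier (heavy forwarder or indexer), not on a universal forwarder:

[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3},
TIME_PREFIX = EVENTTS="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC

Note that timestamp settings only take effect for data indexed after the change; already-indexed events keep their original _time.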
I am trying to break down a URL to extract the region and chart the use of specific URLs over time, but I just get a NULL count for everything. How do I display the counts as separate values?

[query]
| eval region=case(url like "%region1%","Region 1",url like "%region2%","Region 2")
| timechart span=1h count by region
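A sketch of the same search with a catch-all branch added, using the like() function form; it assumes the url field is already extracted at search time (if it is not, every case() branch fails and region stays NULL, which matches the symptom described):

[query]
| eval region=case(like(url, "%region1%"), "Region 1", like(url, "%region2%"), "Region 2", true(), "Other")
| timechart span=1h count by region

If everything still lands in "Other", run the base search with | table url to confirm the field exists; a rex against _raw may be needed first.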
See SPL-248479 in the release notes. If you are using a persistent queue and see the following errors in splunkd.log:

ERROR TcpInputProc - Encountered Streaming S2S error
1. "Cannot register new_channel"
2. "Invalid payload_size"
3. "Too many bytes_used"
4. "Message rejected. Received unexpected message of size"
5. "not a valid combined field name/value type for data received"

Other S2S streaming errors may appear as well.

You should upgrade your HF/IHF/IUF/IDX instance (if it is using a persistent queue) to one of the following patches: 9.4.0 / 9.3.2 / 9.2.4 / 9.1.7 and above. This patch also fixes all the known persistent-queue-related crashes and other PQ issues.
This is similar to a question I asked earlier today that was quickly answered; however, I'm not sure I can apply that solution here because of the transpose, and I'm not sure how to reference the data correctly for that. We have data with 10-15 fields in it and we are doing a transpose like the one below. What we are looking to accomplish is to display only the rows where the values are the same, or alternatively only the rows where they are different.

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1

column    sys1    sys2
field2    a       b
field3    10      10
field4    a       a
field5    10      20
field6    c       c
field7    20      20
field8    a       d
field9    10      10
field10   20      10
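A minimal sketch of one way to filter after the transpose, assuming field1 holds exactly the two system names sys1 and sys2 so they become the column names (the comparison is string-based):

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1
| where sys1 == sys2

Swap the last line for | where sys1 != sys2 to show only the rows that differ.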
Hello, we have two clustered Splunk platforms. Several sources are sent to both platforms (directly to the clustered indexers) with index app-idx1; on the 2nd platform we use props.conf/transforms.conf to rewrite the target index to application_idx2. For an unknown reason, a few sources are falling into the lastchanceindex.

props.conf
[source::/path/to/app_json.log]
TRANSFORMS-app-idx1 = set_idx1_index

transforms.conf
[set_idx1_index]
SOURCE_KEY = _MetaData:Index
REGEX = app-idx1
DEST_KEY = _MetaData:Index
FORMAT = application_idx2

Thanks for your help.
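Events land in the lastChanceIndex when the index they are destined for does not exist on the indexer, so it is worth confirming that application_idx2 is defined on every indexer of the 2nd platform and that the source stanza actually matches the failing sources. A sketch under those assumptions, with the regex anchored so it matches only the exact index name; the paths below are placeholders:

# transforms.conf - anchor the match on the index name
[set_idx1_index]
SOURCE_KEY = _MetaData:Index
REGEX = ^app-idx1$
DEST_KEY = _MetaData:Index
FORMAT = application_idx2

# indexes.conf on the indexers - the destination index must exist
[application_idx2]
homePath   = $SPLUNK_DB/application_idx2/db
coldPath   = $SPLUNK_DB/application_idx2/colddb
thawedPath = $SPLUNK_DB/application_idx2/thaweddb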
I've tried to register for the SplunkWork+ training for veterans and after I verify with ID.me, I receive a message saying that my account is being configured, but then receive a "504 Gateway Timeout". Any ideas?   Thanks!
Register here! This thread is for the Community Office Hours session on Splunk Threat Research Team: Security Content on Wed, Dec 11, 2024 at 1pm PT / 4pm ET.

This is an opportunity to directly ask members of the Splunk Threat Research Team your questions, such as:
- What are the latest security content updates from the Splunk Threat Research Team?
- What are the best practices for accessing, implementing, and using the team's security content?
- What tips and tricks can help leverage Splunk Attack Range, Contentctl, and other resources developed by the Splunk Threat Research Team?
- Any other questions about the team's content and resources!

Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Hello, we identify a failed request by gathering data from 3 different logs. I need to group by userSesnId, and if these specific logs appear in my list, that defines a certain failure. I would like to count each failure using these logs. I would greatly appreciate your help writing this search query. I hope this makes sense. Thank you.

I would like to use the information from these logs, grouped by userSesnId:

Log #1: msgDtlTxt: [Qxxx] - the quote is declined. msgTxt: quote creation failed. polNbr: Qxxx
Log #2: httpStatusCd: 400
Log #3: msgTxt: Request.

They all share the same userSesnId: 10e30ad92e844d

So my results should look something like this:
polNbr    msgDtlTxt            msgTxt                   httpStatusCd    count
Qxxx      Validation: UWing    quote creation failed    400             1
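A rough sketch of the grouping, assuming all three logs can be reached from one base search (the index and sourcetype below are placeholders) and that a failure means the indicators from the different logs all appear under the same userSesnId:

index=your_index sourcetype=your_sourcetype
| stats values(polNbr) AS polNbr values(msgDtlTxt) AS msgDtlTxt values(msgTxt) AS msgTxt values(httpStatusCd) AS httpStatusCd by userSesnId
| search msgTxt="quote creation failed*" httpStatusCd=400
| stats count by polNbr msgDtlTxt msgTxt httpStatusCd

The first stats pulls every field seen for a session onto one row keyed by userSesnId; the search line then keeps only the sessions that match your failure definition before counting.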
If I execute the query below for a selected time range of around 20 hours, it takes a long time, and the matching events number about 272,000. How can I simplify this query so it returns results in 15 to 20 seconds?

index=asvservices authenticateByRedirectFinish (*)
| join request_correlation_id
    [ search index= asvservices stepup_validate ("isMatchFound\\\":true")
    | spath "policy_metadata_policy_name"
    | search "policy_metadata_policy_name" = stepup_validate
    | fields "request_correlation_id" ]
| spath "metadata_endpoint_service_name"
| spath "protocol_response_detail"
| search "metadata_endpoint_service_name"=authenticateByRedirectFinish
| rename "protocol_response_detail" as response
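The usual way to speed this up is to drop the join (its subsearch is capped at 50,000 results and has its own time limits) and correlate both event types in a single pass with stats. A sketch under those assumptions, reusing the field and filter names from the original query:

index=asvservices (authenticateByRedirectFinish OR (stepup_validate "isMatchFound\\\":true"))
| spath "metadata_endpoint_service_name"
| spath "policy_metadata_policy_name"
| spath "protocol_response_detail"
| eval is_finish=if(metadata_endpoint_service_name="authenticateByRedirectFinish", 1, 0)
| eval is_validate=if(policy_metadata_policy_name="stepup_validate", 1, 0)
| stats max(is_finish) AS has_finish max(is_validate) AS has_validate values(protocol_response_detail) AS response by request_correlation_id
| where has_finish=1 AND has_validate=1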
A few servers are hosted in a private VPC that is not connected to the organisation's IT network. How can we onboard those Linux hosts?
In a previous post, we explored monitoring PostgreSQL and general best practices around which metrics to collect and monitor for database performance, health, and reliability. In this post, we’ll look at how to monitor MariaDB (and MySQL) so we can keep the data stores that back our business applications performant, healthy, and resilient.

How to get the metrics

The first step to get metrics from MariaDB or MySQL (for the rest of this post, I’ll just mention MariaDB, but the same approach and receiver works for both) to Splunk Observability Cloud is to instrument the backend service(s) that are connected to your MariaDB(s). If you’re working with the Splunk Distribution of the OpenTelemetry Collector, you can follow the guided install docs. Here I’m using Docker Compose to set up my application, MariaDB service, and the Splunk Distribution of the OpenTelemetry Collector:

Next, we’ll need to configure the OpenTelemetry Collector with a receiver to collect telemetry data from our MariaDB and an exporter to export data to our backend observability platform. If you already have an OpenTelemetry Collector configuration file, you can add the following configurations to that file. Since I set up the Collector using Docker Compose, I needed to create an empty otel-collector-config.yaml file. The mysql receiver is the MariaDB-compatible Collector receiver, so we’ll add it under the receivers block along with our Splunk Observability Cloud exporter under the exporters block. Here’s what our complete configuration looks like (a text sketch appears at the end of this section):

As always, don’t forget to add the receivers and exporters to the service pipelines.

Done! We can now build, start, or restart our service and see our database metrics flow into our backend observability platform.

Visualizing the Data in Splunk Observability Cloud

With our installation and configuration of the OpenTelemetry Collector complete, we can now visualize our data in our backend observability platform, Splunk Observability Cloud.

From within Application Performance Monitoring (APM), we can view all of our application services and get a comprehensive performance overview – including performance details around our MariaDB:

We can explore our Service Map to visualize our MariaDB instance and how it fits in with the other services in our application:

And we can select our MariaDB instance to get deeper insight into performance metrics, requests, and errors:

If we scroll down in the Breakdown dropdown results on the right side of the screen, we can even get quick visibility into Database Query Performance, showing us how specific queries perform and seeing which queries are being executed:

Clicking into Database Query Performance takes us to a view that can be sorted by total query response time, queries in the 90th percentile of latency, or total requests so we can quickly isolate queries that might be impacting our services and our users:

We can select specific queries from our Top Queries for more query detail, like the full query, requests & errors, latency, and query tags:

We can dig into specific traces related to high-latency database requests, letting us see how specific users were affected by database performance:

And also see right down into the span performance:

And the db.statement for the trace:

We can proactively use this information to further optimize our queries and improve user experience.
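Since the screenshots of the Docker Compose setup and the completed configuration don't carry over here, the following is a minimal sketch of what the otel-collector-config.yaml described above might look like. The MariaDB endpoint, credentials, and the SPLUNK_ACCESS_TOKEN / SPLUNK_REALM environment variables are assumptions for illustration:

receivers:
  mysql:                              # MariaDB-compatible receiver
    endpoint: mariadb:3306            # hostname of the MariaDB service in docker-compose (assumed)
    username: ${env:MYSQL_USER}
    password: ${env:MYSQL_PASSWORD}
    collection_interval: 10s

exporters:
  signalfx:                           # Splunk Observability Cloud exporter
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: ${SPLUNK_REALM}

service:
  pipelines:
    metrics:
      receivers: [mysql]
      exporters: [signalfx]

The important part is the last block: the mysql receiver and the signalfx exporter only take effect once they are listed in a metrics pipeline under service.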
You can also see that the database activity shows up in the overall trace waterfall, letting you get a full picture of how all components of the stack were involved in this transaction.

But how can this help us in an incident?

This is all helpful information and can guide us on our journeys to improve query performance and efficiency. But when our database connections fail, when our query errors spike, that’s when this data becomes critical to keeping our applications up and running.

When everything is running smoothly, our database over in APM might look something like this:

But when things start to fail, our Service Map highlights issues in red:

And we can dive into traces related to specific error spikes:

The stacktrace helps us get to the root cause of errors. In this case, we had an improperly specified connection string, and we can even see the exact line where an exception was thrown:

With quick, at-a-glance insight into service and database issues, we easily jumped into the code, resolved our database connection issue, and got our service back up and running so our users could carry on with enjoying our application.

Wrap Up

Monitoring the datastores that back our applications is critical for improved performance, resiliency, and user experience. We can easily configure the OpenTelemetry Collector to receive MariaDB telemetry data and export this data to a backend observability platform for visibility and proactive detection of anomalies that could impact end users.

Want to try out exporting your MariaDB data to a backend observability platform? Try Splunk Observability Cloud free for 14 days!

Resources
Database Monitoring: Basics & Introduction
OpenTelemetry Collector
Configuring Receivers
Monitor Database Query Performance
Hello, I get a "Failed processing http input" error when trying to collect the following JSON event with indexed fields:

{"index" : "test",  "sourcetype", "test", "event":"This is a test", "fields" : { "name" : "test" , "values" : {}  }}

The error is: "Error in handling indexed fields". Could anyone clarify the reason for the error? Is it that a value inside "fields" cannot be empty? I can't prevent it at the source. Best regards, David
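For HEC, each entry in the "fields" object is expected to be a string or an array of strings; a nested object such as "values": {} is rejected, which matches the "Error in handling indexed fields" message. A sketch of a payload shape that should be accepted, assuming the empty object can be dropped or flattened before sending:

{
  "index": "test",
  "sourcetype": "test",
  "event": "This is a test",
  "fields": { "name": "test" }
}

If the source cannot be changed, an intermediate step (for example a heavy forwarder or a small relay script) would need to strip or flatten the offending keys before the payload reaches the HEC endpoint.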
After looking at some examples online, I was able to come up with the query below, which can display one or more columns of data based on the selection of "sysname". What we would like to do with this is optionally select just two sysnames and show only the rows where the values do not match.

index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name

The data format is shown below, and there are a couple hundred value_names for each sysname, with values ranging from integers to long strings:
sysname, value_name, value_info

The above query displays the data something like this:
value_name    SYS1    SYS6
name1         X       Y
name2         A       A
name3         B       C
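A sketch of one way to keep only the mismatching rows, assuming exactly two sysnames (here SYS1 and SYS6) are selected so each becomes a column after the stats:

index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name
| where 'SYS1' != 'SYS6'

If the sysnames come from dashboard tokens, the final where clause would need to reference the token values instead of hard-coded column names.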
Good day, I am trying to figure out how I can join two searches to see whether there is a ServiceNow ticket open for someone leaving the company and whether that person is still signing in to some of our platforms.

This gets the sign-in details for the platform - as users might have multiple email addresses, I want them all:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
    | fields user
    | dedup user
    | eval email=user, extensionAttribute10=user, extensionAttribute11=user
    | fields email extensionAttribute10 extensionAttribute11
    | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

This checks all leavers in ServiceNow:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| table _time affect_dest active description dv_state number

Unfortunately the Supporthub does not add the email to the description, only first names and surnames, so I would need to search the first query's 'first' and 'last' against the second query to find leavers. This is what I tried, but it does not work:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
    | fields user
    | dedup user
    | eval email=user, extensionAttribute10=user, extensionAttribute11=user
    | fields email extensionAttribute10 extensionAttribute11
    | format "(" "(" "OR" ")" "OR" ")" ]
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
    | dedup description
    | rex field=description "*(?<first>\S+) (?<last>\S+)*"
    | fields first last ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

Search one results:
identity    email                      extensionattribute10     extensionattribute11           first    last
nsurname    name.surname@domain.com    nsurnameT1@domain.com    name.surname@consultant.com    name     surname

Search two gets all the tickets that were created for people leaving the company and returns results like this:
_time                  affect_dest    active    description                                     dv_state    number
2024-10-31 09:46:55    STL Leaver     true      Leaver Request for Name Surname - 31/10/2024    active      INC01

So the only way of matching would be to search the second query's description field for where first and last appear.

Expected results:
identity    email                      extensionattribute10     extensionattribute11           first    last       _time                  affect_dest    active    description                                     dv_state    number
nsurname    name.surname@domain.com    nsurnameT1@domain.com    name.surname@consultant.com    name     surname    2024-10-31 09:46:55    STL Leaver     true      Leaver Request for Name Surname - 31/10/2024    active      INC01
jdoe        john.doe@domain.com        jdoeT1@domain.com        jdoe@worker.com                john     doe        2024-11-11 12:46:55    STL Leaver     true      John Doe Offboarding on  - 31/12/2024           active      INC02
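One rough sketch of how the correlation could be done: build a normalized "first last" key on the identity side, then join it against the same key extracted from the ServiceNow descriptions. The match_name field, the join type, and the rex pattern are placeholders and will need tuning to the real description formats (the two example descriptions put the name in different positions):

<your first search from above, down to the stats BY identity>
| eval match_name=lower(first." ".last)
| join type=inner match_name
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
    | dedup description
    | rex field=description "for\s+(?<first>\w+)\s+(?<last>\w+)"
    | eval match_name=lower(first." ".last)
    | table _time affect_dest active description dv_state number match_name ]

Note that the rex in your attempt starts with a bare "*", which is not a valid regex; whatever pattern is used, it has to anchor on something stable in the description text, which is why a consistent ticket description format would make this much more reliable.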
Can you please help me build an eval query?

Condition-1: ABC=Match, XYZ=Match, then the output of ABC compared to XYZ is Match
Condition-2: ABC=Match, XYZ=NO_Match, then the output of ABC compared to XYZ is No_Match
Condition-3: ABC=NO_Match, XYZ=Match, then the output of ABC compared to XYZ is No_Match
Condition-4: ABC=NO_Match, XYZ=NO_Match, then the output of ABC compared to XYZ is No_Match
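Since the output is Match only when both fields are Match, a single if() covers all four conditions; "result" is an assumed name for the output field, and the comparison values are taken literally from the conditions above:

| eval result=if(ABC="Match" AND XYZ="Match", "Match", "No_Match")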
Hello, I have a DB Connect query that gets data from a database and then sends it to a Splunk index. Below are the query and how it looks in Splunk. The data is being indexed as key=value pairs with double quotes around the "value". I have plenty of other data that does not come through DB Connect, and it does not have double quotes around the values. Maybe the quotes are there because I'm using DB Connect? Is it possible to index data from DB Connect without adding the quotes?

When I try to search the data in Splunk I just don't get any data. I think it may have to do with the double quotes, but I'm not sure. Here is the search string. The air_temp field is defined in the Climate data model, and TA (air temperature) in the data is mapped in props.conf for the sourcetype TU_CLM_Time.

| tstats avg(Climate.air_temp) as air_temp from datamodel="Climate" where sourcetype="TU_CLM_Time" host=TU_CLM_1 by host _time span=60m ```Fetching relevant fields from CLM sourcetype in CLM datamodel.```
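The double quotes themselves are usually not the problem: Splunk's automatic key=value extraction treats key="value" and key=value the same at search time. A quick way to check whether the field is actually extracted before looking at the data model mapping is a plain event search (the index name below is an assumption):

index=your_dbconnect_index sourcetype=TU_CLM_Time host=TU_CLM_1
| table _time TA

If TA shows up there but the tstats search returns nothing, the issue is more likely the data model field mapping or acceleration than the quoting added by DB Connect.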