All Topics


We have a Splunk UI that allows users to export a certain set of rows from a lookup. The caveat is that each row might produce multiple output rows with quite complex logic. So, I wonder whether to use the mv set of functions, maybe coupled with the map command, or to develop a Python function to do it. Any suggestions?
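A minimal sketch of the mv route, assuming a hypothetical lookup my_lookup whose field source_field carries the values that drive the fan-out: build a multivalue field with your per-row logic, then let mvexpand emit one output row per value:

| inputlookup my_lookup
``` per-row logic goes here; the result must be a multivalue field ```
| eval expanded=split(source_field, ";")
| mvexpand expanded

mvexpand copies all the other fields of the original row into each output row, and it avoids map's one-subsearch-per-row overhead; a custom Python (external) command is usually only worth it when the expansion logic can't be expressed with eval's mv functions.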
Hello, We're coming from another platform that we used to keep track of incidents.  In order to keep things more consistent after the transition, we'd like to be able to have the IDs of new cases start at a higher number than the default of 1, if possible.  Is this something that can be done? I haven't seen anything in the settings or documentation. Thanks!
I have a time chart of count by field:

| timechart count by field_name limit=0

I would like to divide each value in the statistics table by the mean of that field.

Current Output:
Time  A  B
1     1  4
2     2  5
3     3  6

Desired Output:
Time  A    B
1     0.5  0.8
2     1    1
3     1.5  1.2

I can use a `foreach` to perform an operation on every column, but I am having trouble configuring a subquery within that to calculate the mean and divide by it.
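A sketch of one way to do this without a subsearch: compute each column's mean with eventstats, then divide every column by its mean using a wildcard foreach (this assumes none of the series is literally named _time):

| timechart count by field_name limit=0
``` add a mean_<col> field holding the average of every column ```
| eventstats avg(*) as mean_*
``` divide each original column by its mean; leave the _time column alone ```
| foreach mean_* [ eval <<MATCHSTR>> = if("<<MATCHSTR>>"=="_time", '<<MATCHSTR>>', '<<MATCHSTR>>' / '<<FIELD>>') ]
| fields - mean_*

In foreach mean_*, <<FIELD>> is the full helper field name (e.g. mean_A) and <<MATCHSTR>> is the part matched by the wildcard (e.g. A), which is what lets one template rewrite every column.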
Hi All, I need help identifying which fields can be captured in the description of notables. I have put $filename$ in the description of one of my PA use cases, but it is not capturing the value in the description, even though the actual field in the logs is named filename. Please help me figure out how to solve this.
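As a quick check (a sketch; the base search and the file_name field are placeholders for your actual PA correlation search and field names): a token like $filename$ only resolves if a field literally named filename is present in the final result rows of the correlation search, so run the search and confirm the field survives to the last pipe, renaming it if it only arrives under another name:

index=pan_logs sourcetype=pan:traffic ``` placeholder for your PA correlation search ```
| rename file_name as filename ``` only needed if the field shows up under a different name ```
| table filename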
Hello Everyone, I am ingesting data from Azure EventHub into Splunk using the Splunk Microsoft Cloud Services Add-on. Now I am looking for an app I can use to visualize these events. Can anyone please suggest any pre-built dashboards for EventHub data? Thanks,
I'm working on a query with the goal of determining the percentage rate of request/response event pairs that match by way of a common field value. The events share the same index, sourcetype and source; the field name, however, is different. The field in the request event is called "ID" while the field in the response event is called "InResponseTo". NOTE: The response event also contains a field called "ID" which should be ignored. Here is a sample of each type of event (note that these two events share the matching value "_907b4184-e85c-41f2-9a32-c1c735f01510")...

Request:

Oct 26 17:32:29 ServerX knick.knack [10/26/22, 17:32:20:292 EDT] 00018baf id=00000000 om.tivoli.am.fim.trustserver.sts.utilities.IDMappingExtUtils I traceString INFO --saml20-access-policy-utilities.js - [axamf4EhUmaVjkwwd+akl10BbjbDS1vVg6YJhu2F2E8=]:[protocolContext.getAuthnRequest()] <samlp:AuthnRequest xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" AssertionConsumerServiceURL="https://xyz.com" Destination="https://foobar.com" ForceAuthn="true" ID="_907b4184-e85c-41f2-9a32-c1c735f01510"</samlp:Response>

Response:

Oct 26 17:32:29 ServerX knick.knack [10/26/22, 17:32:20:455 EDT] 00018baf id=00000000 .am.fim.saml20.types.SAML20HTTPPostBrowserResponseWriterImpl 1 getStringMessage Request XML message: <samlp:Response xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://dundermifflin.com" ID="FIMRSP_1635e05b-0184-1dc4-803d-f94b11a6d04e" InResponseTo="_907b4184-e85c-41f2-9a32-c1c735f01510"</samlp:Response>

The development flow:
1. Look at all request and response events for a given timeframe. Sometimes duplicate events show up, so they would need to be deduplicated.
2. If the same value is found for both "ID" and "InResponseTo", that's considered a 100% success rate. Mark it as a match. The sample events provided above would be an example of this.
3. If the same value is not found (i.e., there's a request event but no matching response event), that's considered a 0% success rate. Mark it as a non-match.
4. Add up the number of matches and divide the sum by the total number of request events found. Multiply the quotient by 100 to determine the percentage rate.
5. The results will be visualized using a simple line chart (timechart) showing the average percentage rate over time.

I'm able to compare the overall number of requests against the overall number of responses (regardless of matching field values) to produce a percentage rate, as shown here:

index=zig sourcetype=zag samlp:AuthnRequest
| timechart count as RequestCount
| appendcols
    [ search index=zig sourcetype=zag samlp:Response
      | timechart count as ResponseCount ]
| eval ResponseRequestRatio = round(((ResponseCount/RequestCount) * 100),2)
| table _time, RequestCount, ResponseCount, ResponseRequestRatio

How would I modify this query to accommodate the additional rules? I'm guessing that some sort of count could be used to determine the percentage rate, with a "1" for matches and a "0" (zero) for non-matches.
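A sketch of one way to implement those rules in a single search (extractions for ID and InResponseTo are assumed to already exist; the time bucket for a pair is taken from its earliest event, so a response landing far from its request may need a wider span):

index=zig sourcetype=zag ("samlp:AuthnRequest" OR "InResponseTo")
``` classify each event and pick the correlation key: ID for requests, InResponseTo for responses ```
| eval type=if(searchmatch("samlp:AuthnRequest"), "request", "response")
| eval key=if(type="request", ID, InResponseTo)
``` drop duplicate events of the same type for the same key ```
| dedup type key
``` one row per key: did we see the request, and did we see its response? ```
| stats min(_time) as _time count(eval(type="request")) as req count(eval(type="response")) as resp by key
| where req > 0
| eval matched=if(resp > 0, 1, 0)
``` percentage of matched requests per time bucket ```
| timechart span=1h sum(matched) as matches count as requests
| eval MatchRate=round(matches / requests * 100, 2)

This is exactly the "1 for a match, 0 for a non-match" idea: matched is summed per bucket and divided by the number of deduplicated requests in that bucket.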
Hi, I understand that Zoom will add a new feature, "Secret Token", to replace the "Verification Token", which will be retired in August 2023. Will the current Splunk Connect for Zoom add-on work with this feature, or is there a new version scheduled that covers this? Thank you! Luis Felipe
Hi, Even though I have read some documentation, I have difficulty understanding the difference between a macro and an eventtype. I essentially use macros for index + sourcetype aggregation, but can somebody clearly explain the difference between a macro and an eventtype? Thanks
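For illustration, a minimal pair (stanza names and the index/sourcetype are made up). A macro is plain text substitution, expanded into your search string before it runs; an eventtype is a saved search condition that labels matching events with an eventtype field you can then search and group by:

macros.conf:
[web_base]
definition = index=web sourcetype=access_combined

eventtypes.conf:
[web_error]
search = index=web sourcetype=access_combined status>=500

`web_base` status=404 ``` the macro expands inline, like copy-paste ```
eventtype=web_error | stats count by eventtype ``` the eventtype is a searchable label on each event ```

So a macro can hold any search fragment (and can take arguments), while an eventtype is limited to a search condition but gives you a field for categorization and tagging.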
Dear community, I am using DB Connect with Hive/Spark. I would like to know the benefits of using the Livy Thrift server over Hive/Spark with DB Connect. And if it is possible and makes sense, what would be the best way to configure DB Connect to use the Livy Thrift server? Thanks a lot
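Not an authoritative answer, but a sketch of the connection side: the Livy Thrift server speaks the HiveServer2 Thrift protocol, so from DB Connect it would presumably be configured like an ordinary Hive connection using the Hive JDBC driver, with a URL of this general shape (host, port, and database are placeholders):

jdbc:hive2://<livy-host>:<thrift-port>/<database>

That is, DB Connect would point at the Livy Thrift endpoint instead of directly at HiveServer2; the expected benefit is that queries then run through Livy-managed Spark sessions rather than a dedicated long-running Thrift/Spark service.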
Hi all, Wondering if it is possible to do a 10-minute search window starting from when you first see an event, instead of doing fixed 10-minute windows such as "| bin _time span=10m as window", as that just aligns to minutes from the hour?
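A sketch of one way to anchor windows to the first event instead of the clock (600 seconds = 10 minutes; the base search is a placeholder):

index=my_index
| eventstats min(_time) as first_time
``` window 0 starts at the first event, window 1 ten minutes after it, and so on ```
| eval window=floor((_time - first_time) / 600)
| stats count by window

If you need a human-readable window start, add | eval window_start=strftime(first_time + window*600, "%F %T").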
I have the following events. I am trying to get all the events between the START and END of a job (inclusive). For instance, job 51060 below has 3 events in between. I am trying to use the transaction command, but I don't get the events in between; I only get the start and end events.

| transaction startswith="START" endswith="END"

2022-10-21T23:26:14.075003-07:00 xyz slurm-epilog: END user=svc job=51060
2022-10-21T20:15:16.914838-07:00 xyz kernel: [52023.042550] NVRM: Xid (PCI:): 119, pid=16378, name=cache_mgr_main, Timeout waiting for RPC from GSP! Expected function 76 (GSP_RM_CONTROL) (0x20808513 0x598).
2022-10-21T20:13:46.890841-07:00 xyz kernel: [51933.011964] NVRM: Xid (PCI:): 119, pid=16378, name=cache_mgr_main, Timeout waiting for RPC from GSP! Expected function 76 (GSP_RM_CONTROL) (0x20808513 0x598).
2022-10-21T20:12:16.866833-07:00 xyz kernel: [51842.981401] NVRM: Xid (PCI:): 119, pid=16378, name=cache_mgr_main, Timeout waiting for RPC from GSP! Expected function 76 (GSP_RM_CONTROL) (0x20808513 0x598).
2022-10-21T19:17:25.710875-07:00 xyz slurm-prolog: START user=svc job=51060
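A sketch of one likely fix, assuming the base search itself is what's filtering out the middle events: search broadly enough to return the kernel lines too, and give transaction a shared field (here host) so the in-between events can join the group:

index=my_index host=xyz ``` must match the kernel events as well, not just START/END ```
| transaction host startswith="slurm-prolog: START" endswith="slurm-epilog: END" maxevents=-1

If the transaction still contains only two events, check that all the events really share the grouping field value and that the time range covers the whole job.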
Hi all, I have the following query for a single value visualisation:

| stats count(eval(completedDate < TargetDate)) as completed, count(number) as total
| eval percentage = round((completed/total)*100, 2)." %"

The visualisation is the percentage. I need to change the colors to red and green based on the value of the percentage. This I managed to do. However, I need to change the color of the value to grey if total<5. Is this possible to do, or not? Thanks in advance.
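A sketch of one way this might be done in Simple XML (the token name, colors, and 50% threshold are made up): let the search's <done> handler inspect the total field of the result row and set the rangeColors palette through a token, flipping the whole palette to grey when total is under 5:

<search>
  <query>... your stats/eval search ...</query>
  <done>
    <!-- total under 5: grey out both ranges -->
    <condition match="'result.total' &lt; 5">
      <set token="sv_colors">["0x999999","0x999999"]</set>
    </condition>
    <!-- otherwise: red below the threshold, green above -->
    <condition>
      <set token="sv_colors">["0xd93f3c","0x65a637"]</set>
    </condition>
  </done>
</search>
<option name="rangeType">number</option>
<option name="rangeValues">[50]</option>
<option name="rangeColors">$sv_colors$</option>
<option name="useColors">1</option>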
Please help with a regex to extract only the first IP (highlighted in red in the original post) from the following event:

2022-10-25T14:30:28.108+00:00 10.3.4.150 syslog-ng 14928 - [meta sequenceId="-2074435768"] Error processing log message: <14>1 2022-10-26T10:30:28.588005-04:00 RM-SU-SAM - - RemoteLogging>@< { "logVersion": "1.0", "category": "AUDIT", "timeStamp": "2022-10-26T14:29:43.439Z", "id": "K7pTSQoxfV7pvq3bO8PSehvilSt4yZxEiU9oGkasPx8=", "context": { "tenantId": "ZZNXA0OELD-STA", "originatingAddress": "104.205.81.157, 35.227.230.123, 130.211.2.118,172.30.9.68", "principalId": "opatel@mail.com", "sessionId": "c0r52fac-9fc3-42a1-8e48-492b31c72790", "globalAccessId": "10f31a5e-53b4-4bc8-9ec8-13bb6b670592", "applicationType": "SAML", "applicationName": "Splunk", "policyName": "Global Policy for STA" }, "details": { "type": "ACCESS_REQUEST", "state": "Accepted", "action": "auth", "credentials": [ { "type": "otp", "state": "Verified" } ] }
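Assuming the target (the red highlight doesn't survive here) is the first address in the originatingAddress list, a rex sketch that captures just that one IP:

| rex "\"originatingAddress\":\s*\"(?<first_ip>\d{1,3}(?:\.\d{1,3}){3})"

The capture stops before the first comma because the pattern matches exactly one dotted quad. If the red IP was actually the syslog source (10.3.4.150), anchor on the start of the raw event instead: | rex "^\S+\s(?<first_ip>\d{1,3}(?:\.\d{1,3}){3})".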
Hi all, I want to group my bar chart into 2 columns. The pictures (current result vs. "reached target") show how I have it now and how it should look.

host=my_host index=my_index
| stats count
| join
    [ search host=my_host index=my_index result.value= -1
      | stats count
      | rename count as "Tests_Failed"]
| join type=left
    [ search host=my_host index=my_index result.value= 1
      | stats count
      | rename count as "Tests_Passed"]
| join
    [ search host=my_host index=my_index
      | rename result.value as isFailed
      | eval isFailed=if(isFailed=-1, 0, 1)
      | stats sum(isFailed) as testedPositive by Produktionsnummer
      | stats count(eval(testedPositive!="0")) as io count(eval(testedPositive="0")) as nio]
| table Typ-Nummer, Tests_Failed, Tests_Passed, nio, io
| rename Typ-Nummer AS "Product Name & Number", Tests_Passed AS "Tests Passed", Tests_Failed AS "Tests Failed", percFailed as "Percentage of Tests Failed"

I want to split it into 2 stacked columns like in the picture: (Tests_Failed, Tests_Passed) and (nio, io). Many thanks in advance!
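A sketch of one way to get two stacked columns, approximating the original pass/fail logic: give each row a category label and make that category the chart's x-axis, so the test counts stack in one column and the product counts in the other (chart options: charting.chart=column, charting.chart.stackMode=stacked):

host=my_host index=my_index
| stats count(eval('result.value'=-1)) as Tests_Failed, count(eval('result.value'=1)) as Tests_Passed
| eval category="Tests"
| append
    [ search host=my_host index=my_index
      | eval isPassed=if('result.value'=-1, 0, 1)
      ``` a product counts as io if at least one of its tests passed, per the original testedPositive logic ```
      | stats sum(isPassed) as testedPositive by Produktionsnummer
      | stats count(eval(testedPositive!=0)) as io, count(eval(testedPositive=0)) as nio
      | eval category="Products" ]
| table category, Tests_Failed, Tests_Passed, nio, io

Each series is null in the other row's column, so the chart renders exactly two stacked bars: one stacking Tests_Failed/Tests_Passed, the other stacking nio/io.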
With the Splunk Virus Total app, is a subscription to Virus Total required?
Good Morning, I'm installing the PHP agent 22.8 on an Oracle Linux server with PHP 7.3 and an Apache web server. When I run the install.sh command with the -s and -i options, I get the following error, even though the script finds the PHP version, extensions directory, and ini directory. Does anyone know what it means or how to fix it? "Agent installation does not contain PHP extension for PHP 7.3" thx, M
Hi there, I have Splunk Stream installed and configured on the search head. All dashboards provide me with a lot of data; however, the "database" activity dashboard is empty. I have an external Postgres database and do not know how to connect to it. Thanks for the help. Regards, pawelF
Hi, I have the following query:

index = ABC
| eval domain=mvindex(split(EMAIL_TXT, "@"), 1)
| stats dc(EMAIL_TXT) AS Count_EmailAddress, values(domain) as domain, values(EMAIL_TXT) as Email_Address, values(STRT_DTS) AS Start_Date by IP_ADDR
| where Count_EmailAddress > 1
| sort -Count_EmailAddress

I also have a lookup with a list of IPs: | inputlookup HighRiskDomain

How do I create a new field called "HighRiskIP" with "Yes" or "No" values, depending on whether the IP for that given row matches an IP in the lookup?
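A sketch, assuming the lookup's IP column is named ip (adjust to the real field name in HighRiskDomain): use the lookup command to pull the IP back out when it matches, then turn presence/absence into Yes/No:

index = ABC
| eval domain=mvindex(split(EMAIL_TXT, "@"), 1)
| stats dc(EMAIL_TXT) AS Count_EmailAddress, values(domain) as domain, values(EMAIL_TXT) as Email_Address, values(STRT_DTS) AS Start_Date by IP_ADDR
| where Count_EmailAddress > 1
``` a matching lookup row returns the ip field; a non-match leaves it null ```
| lookup HighRiskDomain ip AS IP_ADDR OUTPUT ip AS risk_match
| eval HighRiskIP=if(isnotnull(risk_match), "Yes", "No")
| fields - risk_match
| sort -Count_EmailAddress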
Hi all, I am checking the list of services that are down on a host using the search below:

index=ivz_unix* Service source="Unix:Service" status=stopped

What search query should I use to check the list of stopped services for a host, based on the host name? For example, I want to see, for a given host, which services are in a stopped state. Please help.
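A sketch building on that search (the service-name field is assumed to be called service; check your extracted field names with fieldsummary if it differs): filter on the host field and list the stopped services per host:

index=ivz_unix* Service source="Unix:Service" status=stopped host="your_host_name"
| stats values(service) as stopped_services by host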
I have a list of hosts in a lookup table. These values aren't static and get updated dynamically every three months. Is it possible to update the lookup dynamically in the two ways mentioned below, without updating the values manually (see the sketch after this list)?
1. Old values need to be replaced by new values
2. New values should be appended to the old values in the table
Thanks!
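A sketch of both variants, assuming a hypothetical lookup my_hosts.csv with a host column and a placeholder base search that yields the fresh host list; schedule either search to run every three months and the lookup maintains itself.

``` 1. replace: overwrite the lookup with the new values ```
index=my_index
| stats count by host
| fields host
| outputlookup my_hosts.csv

``` 2. append: merge the new values into the old ones and deduplicate ```
index=my_index
| stats count by host
| fields host
| inputlookup append=true my_hosts.csv
| dedup host
| outputlookup my_hosts.csv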