All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I've been trying to figure out the most efficient way to do this and am a bit unclear on ingest-time vs. automatic lookups, or whether there is some other way entirely. This is a simple and probably common use case: events come in with an event_id code that is not user friendly. I want to look up the event_id code (an integer) and add a field called event_id_desc with what that code resolves to (e.g. event_id: 5, event_id_desc: user login). What is the most efficient way of doing this? There are 1500 static codes in the CSV. Thanks!
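A minimal sketch of the automatic (search-time) lookup approach, assuming the CSV is uploaded as event_codes.csv with columns event_id and event_id_desc and that the events use a sourcetype called my_events (both names are illustrative):

transforms.conf
[event_code_lookup]
filename = event_codes.csv

props.conf
[my_events]
LOOKUP-event_id_desc = event_code_lookup event_id OUTPUT event_id_desc

For a static table of ~1500 codes, a search-time automatic lookup is usually simpler than index-time enrichment, because the CSV can be updated later without re-indexing the events.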
I have some automatic field extractions specified in props.conf as below:

INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1

They work great and extract about 30 columns, but about 5 of the column names contain two words, and I have to rename them to a single word in every search. I don't see any config setting that would simply remove the space in the automatic extraction. Is there any way to make this change to these few extraction names in a config file?
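One possible workaround, assuming a search-time alias is acceptable and that the awkward headers look something like "order date" and "ship city" (illustrative names): a FIELDALIAS in props.conf on the search head can map the extracted name, space and all, to a space-free field so searches no longer need a rename:

props.conf
[my_csv_sourcetype]
FIELDALIAS-space_free = "order date" AS order_date "ship city" AS ship_city

The indexed field keeps its original header name; the alias just adds the one-word version alongside it.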
Hello, I have a SH cluster running Enterprise Security (ES). I would like to migrate Enterprise Security to a second SH cluster. Can I do this? I think so :) but I didn't find official instructions. Can you help me? Or would it be better to migrate all apps except ES? I am looking forward to your answer.
Hi, Can anyone help me solve the problem of Java stack trace log entries being broken across multiple Splunk events, each with its own timestamp?
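A minimal props.conf sketch for keeping a multi-line stack trace in one event, assuming every genuine log entry starts with a timestamp of the form 2021-07-12 09:33:20 (the sourcetype name and timestamp pattern are illustrative and must match the actual logs):

props.conf (on the indexer or heavy forwarder that parses the data)
[java_app_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

With this, lines that do not start with a timestamp (the stack trace continuation lines) stay attached to the preceding event.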
I just want to calculate the passed percentage for every date. I have the passed count as well as the total devices. Is there any logic to calculate the percentage for every date dynamically, because more data will be added with different dates? Please help me; is there any way to do that? It would be appreciated. Thank you.
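A minimal SPL sketch, assuming the fields are named passed_count, total_devices and date (illustrative; adjust to the actual field names):

... | stats sum(passed_count) as passed sum(total_devices) as total by date
    | eval passed_pct = round(passed / total * 100, 2)

Because the percentage is computed per date group, any new dates added later are picked up automatically.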
Hi All, I have a field with the following value: [ "842cef72-745d-463c-8b49-ce16ccc5ebd2" ] I'd like to get rid of the square brackets and the quotes, ending up with: 842cef72-745d-463c-8b49-ce16ccc5ebd2
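One possible approach, assuming the field is called id (an illustrative name); a sed-style rex strips the brackets, quotes and spaces in place:

... | rex field=id mode=sed "s/[\[\]\" ]//g"

An eval with trim(id, "[] \"") would also work here, since the unwanted characters only appear at the ends of the value.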
Hi everybody, I need to insert into my dashboard a button that calls a URL, embedding in the string the values of some tokens generated by the dashboard inputs. To do this I added the following HTML lines to the XML code:

<row>
  <panel>
    <html>
      <form action="heregoestheURL" target="_blank">
        <input type="submit" value="Esporta"/>
      </form>
    </html>
  </panel>
</row>

The code works fine: the HTML button is correctly displayed in the dashboard and it correctly opens a new window in the browser when pushed, but I cannot find a way to dynamically pass the token values to the URL. Does anyone have any idea how I could implement this? Any help is really appreciated. Thanks! Enrico
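A possible sketch, assuming a dashboard token named $mytoken$ (illustrative); Simple XML performs token substitution inside <html> panels, so hidden inputs can carry the token values into the form submission as query-string parameters:

<row>
  <panel>
    <html>
      <form action="heregoestheURL" target="_blank" method="get">
        <input type="hidden" name="param1" value="$mytoken$"/>
        <input type="submit" value="Esporta"/>
      </form>
    </html>
  </panel>
</row>

With method="get", the browser appends param1=<token value> to the action URL when the button is pushed.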
Hello, I am trying to get the sales report for 3 months, but the search only returns results for the last 15 days; results older than 15 days are all zeros. The job notification shows: [pdx-nav-non-prod-splunk-idx-240-132] Your search has been restricted to a time span of 1296000 seconds, i.e. 15 days. So my question is: how can I get the full report for 3 months, or how can I increase the time span of a search?
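That restriction typically comes from the search time-window limit (srchTimeWin) on the user's role, set in authorize.conf or under Settings > Roles. A hedged sketch, assuming a role named sales_user (illustrative) and a 90-day window (90 * 86400 = 7776000 seconds):

authorize.conf
[role_sales_user]
srchTimeWin = 7776000

If you cannot change the role yourself, an admin would need to adjust it.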
We have 3 different records (Active, Closed, Resolved) for the same incident, and we need to retrieve only incidents whose only record is Active, i.e. the incident should not have any other status records such as Closed or Resolved. The query below still shows an incident's Active record even when that incident is already in Resolved status...

index="snow" sourcetype="snow:incident" source="https://dell.service-now.com/" dv_assignment_group="ITOPS-DCE-SELLER-SUPPORT" dv_u_cim_true="true"
| where like(dv_incident_state,"Active") AND NOT like(dv_incident_state,"Resolved") AND NOT like(dv_incident_state,"Closed")
| dedup dv_incident_state
| stats count by dv_incident_state, dv_number, dv_active
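One possible sketch, assuming dv_number identifies the incident (as in the query above); grouping by incident first and then filtering keeps only incidents whose only observed state is Active:

index="snow" sourcetype="snow:incident" source="https://dell.service-now.com/" dv_assignment_group="ITOPS-DCE-SELLER-SUPPORT" dv_u_cim_true="true"
| stats values(dv_incident_state) as states dc(dv_incident_state) as state_count latest(dv_active) as dv_active by dv_number
| where state_count=1 AND states="Active"

The original query filters individual events, so an incident's Active record passes even when a Resolved or Closed record for the same dv_number also exists; aggregating by dv_number lets the filter see all of an incident's records at once.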
I have the following data that I would like to parse and put into a line chart. There are millions of rows of data, and I'm looking to find tasks that seem to take the longest. I can't for the life of me get it to parse, even after reading the many accepted answers. Any help would be greatly appreciated. Here's a sample of the data:

{
  "latency_info": [
    { "started": "0", "task": "Start" },
    { "started": "0", "task": "api-routing" },
    { "started": "1", "task": "api-cors" },
    { "started": "1", "task": "api-client-identification" },
    { "started": "1", "task": "api-rate-limit" },
    { "started": "1", "task": "api-security" },
    { "started": "1", "task": "api-execute" },
    { "started": "1", "task": "assembly-invoke" },
    { "started": "2", "task": "api-result" }
  ]
}
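A possible parsing sketch, assuming the JSON above is the raw event; spath pulls out the array, mvexpand gives one row per task, and a second spath extracts the per-task fields:

... | spath path=latency_info{} output=task_entry
    | mvexpand task_entry
    | spath input=task_entry
    | eval started = tonumber(started)
    | table _time task started

From there, per-task durations (the gap between one task's started value and the next one's within the same event) can be computed, e.g. with streamstats, and fed into a line chart or a stats/sort to find the slowest tasks.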
Hi All, This might sound obvious, but I've spent some time on this without getting the right output. I have an XML dashboard with a time input that has several values (Today, Yesterday, ...). When Today is selected I want to show yesterday's data and last week's data as a bar chart; for any selection other than Today I want to show that day's data along with the data from the previous week. Thanks
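One possible sketch of the search side, with hard-coded windows for the "Today is selected" case (the index name and windows are illustrative; in the dashboard these earliest/latest values would be driven by tokens set from the time input, typically via <change>/<condition> handlers):

index=my_index earliest=-1d@d latest=@d
| eval period="yesterday"
| append
    [ search index=my_index earliest=-8d@d latest=-7d@d
      | eval period="same day last week" ]
| stats count by period

The same pattern, shifted to the selected day, covers the "any other day plus its previous week" case.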
Hi All,

2021-07-12 09:33:20,659 - daemons.save_claim_dex.src.__main__ - INFO - Skill='SAVE_CLAIM_INFO', message='skill execution info', ActivationId='3b660cbf-77c0-4999-a76c-aca5833aa3ca', Method='POST', TxnStatus='SUCCESS', StatusCode='200', TxnTimeTaken='34.358', StartTimestamp='1626082400.624848', EndTimestamp='1626082400.659206', TxnStartTime='2021-07-12T09:33:20.624848', TxnEndTime='2021-07-12T09:33:20.659206'
2021-07-12 09:33:20,582 - daemons.dex_gen_resp.src.__main__ - INFO - Skill='DEX_GENERATE_RESPONSE', message='skill execution info', ActivationId='3b660cbf-77c0-4999-a76c-aca5833aa3ca', Method='POST', TxnStatus='SUCCESS', StatusCode='200', TxnTimeTaken='91.984', StartTimestamp='1626082400.490515', EndTimestamp='1626082400.582499', TxnStartTime='2021-07-12T09:33:20.490515', TxnEndTime='2021-07-12T09:33:20.582499'
2021-07-12 09:33:20,435 - daemons.save_claim_dex.src.__main__ - INFO - Skill='SAVE_CLAIM_INFO', message='skill execution info', ActivationId='3b660cbf-77c0-4999-a76c-aca5833aa3ca', Method='POST', TxnStatus='SUCCESS', StatusCode='200', TxnTimeTaken='49.063', StartTimestamp='1626082400.386638', EndTimestamp='1626082400.435701', TxnStartTime='2021-07-12T09:33:20.386638', TxnEndTime='2021-07-12T09:33:20.435701'

From the above log, I need two Splunk queries:
1. Trace the ActivationId across the different skills it processed and provide the end-to-end response time, which should include all of the skills' response times.
2. Group the response times of the skills reported in the info messages (e.g. SAVE_CLAIM_INFO, DEX_GENERATE_RESPONSE) in table format.
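A possible sketch for both, assuming the Skill, ActivationId and TxnTimeTaken key-value pairs have been extracted (automatically or with a rex) and that TxnTimeTaken is in milliseconds (an assumption); index=my_index is illustrative:

1. End-to-end response time per ActivationId, summing all of its skills:
index=my_index "skill execution info"
| stats sum(TxnTimeTaken) as total_response_ms values(Skill) as skills dc(Skill) as skill_count by ActivationId

2. Response times grouped by skill, in a table:
index=my_index "skill execution info"
| stats count avg(TxnTimeTaken) as avg_ms max(TxnTimeTaken) as max_ms by Skill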
Hi, I am trying to create a query that highlights when specified accounts are used outside of their corresponding IP ranges, using a CSV lookup table. For example, user account 'user1' has signed in from source IP 10.0.0.200, but they are only meant to sign in from 10.0.0.0/25, 11.0.0.0/25 or 12.0.0.0/25. The CSV file would look like this:

User, allowed_cidr_range1, allowed_cidr_range2, allowed_cidr_range3
User1, 10.0.0.0/25, 11.0.0.0/25, 12.0.0.0/25
User 2, 10.0.0.128/25, 11.0.0.128/25
User 3, 10.0.1.0/25

Note that some accounts have a single range, some multiple. Does anyone know how I could build an appropriate lookup command that will only show user accounts that have been used outside of their designated IP ranges?
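One possible sketch, assuming the lookup is uploaded as user_allowed_ranges.csv and the sign-in events carry fields user and src_ip (illustrative names); cidrmatch() tests an address against each allowed range, and the isnotnull() guards handle users with fewer than three ranges:

index=auth_logs
| lookup user_allowed_ranges.csv User as user OUTPUT allowed_cidr_range1 allowed_cidr_range2 allowed_cidr_range3
| eval in_range = if((isnotnull(allowed_cidr_range1) AND cidrmatch(allowed_cidr_range1, src_ip)) OR (isnotnull(allowed_cidr_range2) AND cidrmatch(allowed_cidr_range2, src_ip)) OR (isnotnull(allowed_cidr_range3) AND cidrmatch(allowed_cidr_range3, src_ip)), 1, 0)
| where in_range=0

An alternative is to reshape the CSV to one user/range pair per row and define the lookup with a CIDR match_type, which avoids the per-column eval.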
Hi, I'm new to working with Splunk - I began exploring the program last Monday... My task is to create a dashboard visualizing the availability of a machine. My working base: the machine data, additional fault reports from workers (entered via tablets), and knowledge of the working times (Monday to Friday, 5:30am to 10:30pm). Now I want to use only the daily data in the time range from 5:30am to 10:30pm, because availability should only represent the fault times in relation to the real working time. How can I do that without specifying a date? Many thanks and greetings from Germany, Felix
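A possible filtering sketch, assuming the machine events live in index=machine_data (illustrative); it keeps only events between 05:30 and 22:30 on Monday to Friday, independent of the date:

index=machine_data
| eval hhmm = tonumber(strftime(_time, "%H%M"))
| eval wday = strftime(_time, "%A")
| where hhmm >= 530 AND hhmm <= 2230 AND NOT (wday="Saturday" OR wday="Sunday")

The availability calculation (fault time vs. working time) can then be built on top of the filtered events.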
Hello, I am looking to forward all network traffic from a single container (acting as a honeypot) to a Splunk Stream instance. Currently I am doing this locally with a free trial of Splunk, before eventually doing it online. I have attempted various methods, and an Independent Stream Forwarder appears to be the most appropriate. However, I am struggling to install an Independent Stream Forwarder in a Docker container. Below you will find an image of the current iteration of the Dockerfile and corresponding run output. Unfortunately, in this example the log file is not generated, which hinders the debugging process.   I am well aware that the implementation above is not compliant with the one-application, one-container rule; however, it appears necessary for my intended use case. Furthermore, the logging alternatives, such as Docker's Splunk Logging Driver and Splunk's Docker Logging Plugin, don't seem appropriate for this use case.   Thanks in advance.
Hi there, I'm trying to track down events that appear on different days from one another. E.g. if I have 5 events with field ID=foo that all appear on the same day (e.g. Monday) and another 5 with ID=bar that are spread out over several days (e.g. Monday, Thursday, Saturday), I only want to return the ID=bar results. Basically I want to do this for any events with matching IDs that occur on multiple different days. What would be the best way to do this?
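A minimal sketch, assuming the field really is named ID; it counts distinct calendar days per ID and keeps only IDs seen on more than one day (index=my_index is illustrative):

index=my_index
| eval day = strftime(_time, "%Y-%m-%d")
| stats dc(day) as day_count values(day) as days count by ID
| where day_count > 1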
Hello, in our environment we already use the UF on 15K servers, but they send logs to the indexer cluster on the default port and without any compression. We now need to enable SSL compression on them to save bandwidth, and for this we would use a different port than the default one. What I need to know is: if we enable SSL compression in outputs.conf on all servers, what will happen where a server does not have connectivity to the indexers on that new port? Will it break log forwarding to the indexers, or will it fall back to sending uncompressed data on the default port?
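For reference, a hedged sketch of the kind of outputs.conf change under discussion, assuming the indexers open an SSL listener on port 9998 (the port, hostnames and cert paths are illustrative):

[tcpout]
defaultGroup = indexers_ssl

[tcpout:indexers_ssl]
server = idx1.example.com:9998, idx2.example.com:9998
clientCert = /opt/splunkforwarder/etc/auth/mycerts/myClientCertificate.pem
sslPassword = <private key password>
useClientSSLCompression = true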
Hi, I am trying to get secure comms between a forwarder and an indexer up and running using self-signed certs, but despite following the relevant guide (https://docs.splunk.com/Documentation/Splunk/8.2.1/Security/Howtoself-signcertificates) I keep ending up with the same problem. I'm generating the self-signed certs on a deployment server, creating the root CA cert, server cert and server private key before transferring them to the indexer and forwarder. Once there, I create a new server cert by combining the 3 files. I've also created the relevant inputs.conf, outputs.conf and server.conf files using the config guide. The guide says to use "password = <string>" in both the inputs and outputs conf files, but this raises an error as it is deprecated, so I've used "sslPassword" instead. After restarting splunkd, in the splunkd log on the indexer I'm getting:

ERROR TcpInputProc - Error encountered for connection from src=10.1.1.34:50772. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

I've tried searching for the error and trying various other fixes, e.g. specifying sslVersions or cipherSuite, but I'm still getting the above error. Could anyone offer some help as to where I may be going wrong, please? I've copied the conf files and some output from the splunkd logs below.

Forwarder outputs.conf

[tcpout:group1]
server = 10.1.1.20:9997
disabled = 0
clientCert = /opt/splunk/etc/auth/mycerts/myNewServerCertificate.pem
sslPassword = <key used to generate myServerPrivateKey.key>
useClientSSLCompression = true

Forwarder server.conf

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCACertificate.pem

Forwarder splunkd log

cat /opt/splunk/var/log/splunk/splunkd.log | grep SSL
07-08-2021 10:19:22.919 +0100 INFO loader - Setting SSL configuration.
07-08-2021 10:19:22.919 +0100 INFO loader - Server supporting SSL versions SSL3,TLS1.0,TLS1.1,TLS1.2
07-08-2021 10:19:46.393 +0100 INFO MongodRunner - Using mongod command line --sslMode requireSSL
07-08-2021 10:19:47.957 +0100 INFO TcpInputProc - Creating fwd data Acceptor for IPv4 port 9997 with Non-SSL

cat /opt/splunk/var/log/splunk/splunkd.log | grep TcpOut
07-08-2021 10:43:42.172 +0100 INFO TcpOutputProc - Found currently active indexer. Connected to idx=10.1.1.20:9997, reuse=1.

Indexer inputs.conf

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myNewServerCertificate.pem
sslPassword = <key used to generate myServerPrivateKey.key>
requireClientCert = false
useSSLCompression = false

Indexer server.conf

[sslConfig]
sslPassword = $7$YNwWFOGvWECUWkppnTLseT5sGq3wJs72wGEjlZuHDphTK3Jty2nhPQ==
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCACertificate.pem

Indexer splunkd.log

cat /opt/splunk/var/log/splunk/splunkd.log | grep SSL
07-08-2021 10:29:02.382 +0100 INFO ServerConfig - SSL session cache path enabled 0 session timeout on SSL server 300.000
07-08-2021 10:29:02.520 +0100 INFO loader - Setting SSL configuration.
07-08-2021 10:29:02.520 +0100 INFO loader - Server supporting SSL versions SSL3,TLS1.0,TLS1.1,TLS1.2
07-08-2021 10:29:03.093 +0100 INFO MongodRunner - Using mongod command line --sslMode requireSSL
07-08-2021 10:29:04.886 +0100 INFO TcpInputConfig - Creating FwdDataSSLConfig SSL context. Will open port=IPv4 port 9997 with compression=1
07-08-2021 10:29:04.914 +0100 INFO TcpInputConfig - IPv4 port 9997 is reserved for splunk 2 splunk (SSL)
07-08-2021 10:29:04.915 +0100 INFO TcpInputProc - Creating fwd data Acceptor for IPv4 port 9997 with SSL
07-08-2021 10:32:14.117 +0100 ERROR TcpInputProc - Error encountered for connection from src=10.1.1.34:50770. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
07-08-2021 10:32:14.118 +0100 ERROR TcpInputProc - Error encountered for connection from src=10.1.1.34:50772. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

cat /opt/splunk/var/log/splunk/splunkd.log | grep Tcp
07-08-2021 10:29:04.885 +0100 INFO TcpInputConfig - IPv4 port 9997 is reserved for splunk 2 splunk
07-08-2021 10:29:04.886 +0100 INFO TcpInputConfig - Creating FwdDataSSLConfig SSL context. Will open port=IPv4 port 9997 with compression=1
07-08-2021 10:29:04.914 +0100 INFO TcpInputConfig - IPv4 port 9997 is reserved for splunk 2 splunk (SSL)
07-08-2021 10:29:04.915 +0100 INFO TcpInputProc - Creating fwd data Acceptor for IPv4 port 9997 with SSL
07-08-2021 10:29:05.308 +0100 INFO TcpOutputProc - _isHttpOutConfigured=NOT_CONFIGURED
07-08-2021 10:32:14.117 +0100 ERROR TcpInputProc - Error encountered for connection from src=10.1.1.34:50770. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
07-08-2021 10:32:14.118 +0100 ERROR TcpInputProc - Error encountered for connection from src=10.1.1.34:50772. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
I have looked at the config files and they are configured the same; however, I'm seeing the error below in the logs:

ERROR [140550481172224] (CaptureServer.cpp:2295) stream.CaptureServer - /en-us/custom/splunk_app_stream/indexers?streamForwarderId=*********.tracfone.com status=504
Hi, I need a regex to extract the module name. Here is the log:

15:25:36.999 user module_W: A[00]B[0000000]C[0]L: process read compeleted!

The module name is "module_W": it is the text after "user", matched with a wildcard up to the colon. Any idea? Thanks,
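A possible extraction sketch, assuming the module name is always the token between "user " and the next colon:

... | rex "user\s+(?<module_name>[^:]+):"

On the sample line this captures module_name="module_W".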