All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi everyone, We have a problem when we try to graph custom metrics on a dashboard. We can create the metrics, and we wait 24 hours before building the dashboard and the new graphs, but when we go to create them, the SaaS shows us only one of the metrics. The other one cannot be graphed because AppDynamics does not show it to us. What can we do to see all custom metrics on the dashboard? Regards, Henry Tellez
Hello and happy new year to all, As the title says, I would like to get the list of servers that have connected over the last 14 days (LastLogon)... I have tried several methods but nothing works. Here is my query:
index=msad SamAccountName=*$ VersionOS="Windows Server*"
| eval llt=strptime(LastLogon,"%d/%m/%Y %H:%M:%S")
| eval LastLogon2=strftime(llt, "%d/%m/%Y %H:%M:%S")
| rex field=SamAccountName mode=sed "s/\$//g"
| table Domain,SamAccountName,VersionOS,LastLogon2
Thanks
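A minimal SPL sketch of one possible approach, assuming LastLogon is populated on each event in the dd/mm/YYYY format shown above: convert it to epoch with strptime, keep only values inside the last 14 days via relative_time, and take the most recent logon per server:
index=msad SamAccountName=*$ VersionOS="Windows Server*"
| eval llt=strptime(LastLogon,"%d/%m/%Y %H:%M:%S")
| where llt >= relative_time(now(), "-14d@d")
| rex field=SamAccountName mode=sed "s/\$//g"
| stats max(llt) AS last_logon_epoch BY Domain SamAccountName VersionOS
| eval LastLogon2=strftime(last_logon_epoch, "%d/%m/%Y %H:%M:%S")
| fields - last_logon_epoch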
Hello guys. I received this task at my job, and I still need money in my pocket, so please help me :)) I'm in a Network Operations team; maybe this will help you understand the following description better. In a single Splunk search I have to connect two databases from different servers. One DB contains "Incidents": Incident ID, start time of the incident (let's call it A), end time of the incident (B). The other DB contains "Call Complaints": the timestamp of each call complaint (C). I need to find the number of call complaints for each incident. So, if C>=A AND C<=B, we count a call complaint for that specific incident, then move on to check the next C timestamp, and so on. I have issues right from the start. I tried to connect the databases with the following syntax:
| dbxquery query=[...] connection=A
| join
    [ dbxquery query=[...] connection=B ]
But when I run a table command to see the fields of interest (Incident ID, A, B, C), the fields from the joined DB look the same on every row. Could you please help me with this?
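One hedged sketch of a different approach, since join needs a shared key that these two tables do not have: drive the search from the incidents and count the matching complaints per incident with map. The connection names, SQL, and field names (incident_id, start_time, end_time, call_time) are placeholders, and the timestamp formats are assumptions:
| dbxquery connection=A query="SELECT incident_id, start_time, end_time FROM incidents"
| eval A=strptime(start_time, "%Y-%m-%d %H:%M:%S"), B=strptime(end_time, "%Y-%m-%d %H:%M:%S")
| map maxsearches=1000 search="| dbxquery connection=B query=\"SELECT call_time FROM complaints\" | eval C=strptime(call_time, \"%Y-%m-%d %H:%M:%S\") | where C>=$A$ AND C<=$B$ | stats count AS complaint_count | eval incident_id=\"$incident_id$\""
Note that map re-runs the complaints query once per incident row, so this only scales to a modest number of incidents; it is shown because it maps most directly onto the C>=A AND C<=B condition.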
Hi All, I have two sets of logs: one has the connector details and the other has the error details if any connector fails.
First set:
Log1: sink.http.apac.hk.dna.eap.adobedmp.derived.int.aamrtevents FAILED|mwgcb-csrla02u RUNNING|mwgcb-csrla02u NA
Log2: sink.http.apac.au.dna.eap.adobedmp.derived.int.aamrtevents RUNNING|mwgcb-csrla02u RUNNING|mwgcb-csrla02u NA
Log3: sink.http.apac.ph.dna.eap.adobedmp.derived.int.aamrtevents FAILED|mwgcb-csrla01u FAILED|mwgcb-csrla01u NA
Log4: sink.http.apac.th.dna.eap.adobedmp.derived.int.aamrtevents FAILED|swgcb-csrla01u FAILED|mwgcb-csrla01u NA
Here I used the below query to extract the required fields:
... | rex field=_raw "(\w+\.)(?P<Component>\w+)\."
| rex field=_raw "(?P<Connector_Name>(\w+\.){3,12}\w+)\s"
| rex field=_raw "(?P<Connector_Name>(\w+\-){3,12}\w+)\s"
| rex field=_raw "(\w+\.){3,12}\w+\s(?P<Connector_State>\w+)\|"
| rex field=_raw "(\w+\-){3,12}\w+\s(?P<Connector_State>\w+)\|"
| rex field=_raw "(\w+\.){3,12}\w+\s\w+\|(?P<Worker_ID>\w+\-\w+)\s"
| rex field=_raw "(\w+\-){3,12}\w+\s\w+\|(?P<Worker_ID>\w+\-\w+)\s"
| rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task1_State>\w+)\|"
| rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task1_State>\w+)\|"
| rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker1_ID>\w+\-\w+)\s"
| rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker1_ID>\w+\-\w+)\s"
| replace "mwgcb-csrla01u_XX_" with "mwgcb-csrla01u" in Worker1_ID
| replace "mwgcb-csrla02u_XX_" with "mwgcb-csrla02u" in Worker1_ID
| rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task2_State>\w+)"
| rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task2_State>\w+)"
| replace "NA" with "Not_Available" in Task2_State
| rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker2_ID>\w+\-\w+)"
| rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker2_ID>\w+\-\w+)"
| replace "mwgcb-csrla01u_XX_" with "mwgcb-csrla01u" in Worker2_ID
| replace "mwgcb-csrla02u_XX_" with "mwgcb-csrla02u" in Worker2_ID
| fillnull value="Not_Available" Task1_State,Worker1_ID,Task2_State,Worker2_ID
Second set:
Log1: }
Log2: "type": "sink"
Log3: "tasks": [],
Log4: "name": "sink.http.apac.hk.dna.eap.adobedmp.derived.int.aamrtevents",
Log5: },
Log6: "worker_id": "mwgcb-csrla02u.nam.nsroot.net:8074"
Log7: "trace": "org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed\nCaused by: javax.net.ssl.SSLProtocolException: Unexpected handshake message: server_hello\n\tat sun.security.ssl.Alert.createSSLException(Alert.java:129)\n\tat sun.security.ssl.Alert.createSSLException(Alert.java:117)\n\tat sun.security.ssl.TransportContext.fatal(TransportContext.java:356)\n\tat sun.security.ssl.TransportContext.fatal(TransportContext.java:312)\n\tat sun.security.ssl.TransportContext.fatal(TransportContext.java:303)\n\tat sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:473)\n\tat sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:990)\n\tat sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:977)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:924)\n\tat org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:501)\n\tat org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:593)\n\tat org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:439)\n\tat org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:324)\n\tat org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:205)\n\tat org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:561)\n\tat org.apache.kafka.common.network.Selector.poll(Selector.java:498)\n\tat org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:575)\n\tat org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1358)\n\tat org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1289)\n\tat java.lang.Thread.run(Thread.java:750)\n",
Log8: "state": "FAILED",
Log9: "connector": {
Log10: {
And then the set repeats for the next connector. I tried to extract the required fields using the below queries:
Query1: ... | regex _raw="name" | rex field=_raw "name\"\:\s\"(?P<Connector>[^\"]+)\"\,"
Query2: ... | rex field=_raw "Caused\sby\:\s(?P<Exception>[^\:]+)\:\s" | search Exception!="org.apache.kafka.common.KafkaException" | rex field=_raw "Caused\sby\:\s([^\:]+)\:\s(?P<Error_Msg>(\w+\:\s){0,1}(\w+\s){1,15}\w+)"
Note that both sets of logs have different sources but the same index. I want to create a table from both sets of logs so that, alongside the failed connector, we can also see the error details. For example:
Connector_Name | Connector_State | Worker_ID | Task1_State | Worker1_ID | Task2_State | Worker2_ID | Exception | Error_Msg
sink.http.apac.hk.dna.eap.adobedmp.derived.int.aamrtevents | FAILED | mwgcb-csrla02u | RUNNING | mwgcb-csrla02u | NA | NA | javax.net.ssl.SSLProtocolException | Unexpected handshake message
sink.http.apac.au.dna.eap.adobedmp.derived.int.aamrtevents | RUNNING | mwgcb-csrla02u | RUNNING | mwgcb-csrla02u | NA | NA | NA | NA
I need to take the connector name from the second set, compare it with the first set, and wherever a connector is in a FAILED state, add the Exception and Error_Msg there. Please help me get my desired table. Your kind help is highly appreciated. Thank you..!!
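Since both sets share the index and both carry the connector name, one hedged sketch (the source filters below are placeholders for your two real sources, and the rex extractions are the ones already shown above) is to search both sources at once, normalise the connector name into one field, and roll everything up per connector with stats:
index=your_index (source="*connector_status*" OR source="*connector_error*")
  <the rex extractions for the first and second set shown above>
| eval Connector_Name=coalesce(Connector_Name, Connector)
| stats values(Connector_State) AS Connector_State values(Worker_ID) AS Worker_ID values(Task1_State) AS Task1_State values(Worker1_ID) AS Worker1_ID values(Task2_State) AS Task2_State values(Worker2_ID) AS Worker2_ID values(Exception) AS Exception values(Error_Msg) AS Error_Msg BY Connector_Name
| fillnull value="NA" Exception Error_Msg
Because stats groups by the shared Connector_Name, the failed connectors pick up Exception and Error_Msg from the error events, while the healthy ones fall back to "NA".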
Hi everyone, I'm very new here. I need support with extracting the value "safeframe.googlesyndication.com" from "ofc62fbe04078e8d3b0843298ad3421d.safeframe.googlesyndication.com" using a regular expression, or is there any other command I can use to delete the crap before the URL host? Thank you.
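A hedged sketch, assuming the value lives in a field called urlhost (adjust the field name to yours): a rex that drops everything up to and including the first dot keeps only the part you want:
... | rex field=urlhost "^[^\.]+\.(?<clean_host>.+)$"
If more than one random label can be prepended, anchoring on the known suffix instead, for example "(?<clean_host>safeframe\.googlesyndication\.com)$", may be more robust.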
Splunk Enterprise 9.0.1. We upgraded DB Connect from 3.7.0 to 3.11.1 and started to notice that some inputs were not working. Also, in _internal we found an event that lists inputs that require migration:
INFO c.s.d.s.m.s.DbInputToModularInputInstanceMigrator - DB Inputs requiring migration: [...]
The current documentation doesn't say anything about this migration: https://docs.splunk.com/Documentation/DBX/3.11.1/DeployDBX/MigratefromDBConnectv1 Is there any other documentation on how to migrate to the latest version?
UPDATE: We found that the cron expression validation changed between versions: '1/10 * * * *' was a valid expression in 3.7, but it is no longer valid; we have to use '1-59/10 * * * *'. We also have to disable and then re-enable the input for it to start working again; changing the cron alone is not enough.
Hello, I'm using Dashboard Studio (DS) and having an issue formatting a table with a single column based on its value. The field values look something like this:
CPU: 37.1% C
MEM: 60.2% W
NET: 3.6% N
Here is the pseudocode for selecting the background color of the cell based on its value:
If the value contains at least one "% C", the background color of the cell needs to be red.
Else if the value contains at least one "% W", the background color of the cell needs to be yellow.
Else if the value does not contain "% C" and does not contain "% W", the background color of the cell needs to be green.
End if.
According to a web page I read about DS, you cannot use any inline CSS or JS functions. I don't believe I need JS for this simple use case. I read here, Selector and formatting functions, that I can use matchValue(). After resolving all the JSON errors (missing brackets and braces), there is no change in the cell background color. Thanks in advance and God bless, Genesius
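If matchValue() keeps fighting back, one alternative (sketched here with an assumed field name of metric) is to classify the value in SPL with match() and case(), and then colour the table on the resulting category rather than on the raw string:
... | eval cell_status=case(match(metric, "% C"), "critical", match(metric, "% W"), "warning", true(), "ok")
Exact-value matching on critical/warning/ok tends to be easier to express in the table formatting configuration than substring matching on the metric string itself.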
Hello, Can someone please kindly help me with the JavaScript for the below criteria: create JavaScript that calls PagerDuty from a button, eventually passing parameters from the dashboard. Thanks
Hi there, I have a dashboard with bar charts. The value of a bar chart would be in this format, e.g. "foo-10-bar". After a user clicks on the bar chart, it leads to another dashboard, with the drilldown generating a token with the field value, $field$="foo-10-bar" in this case. Is it possible to split the field value into three parts with "-" as a separator when the user clicks on the bar chart, so that in this case there would be three tokens: $FieldA$=foo, $FieldB$=10, $FieldC$=bar? Thanks.
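One hedged way to get the same effect without extra tokens is to pass the whole value as a single token and split it inside the destination dashboard's search with split() and mvindex(); $field$ below is the token from your drilldown:
... | eval parts=split("$field$", "-")
| eval FieldA=mvindex(parts, 0), FieldB=mvindex(parts, 1), FieldC=mvindex(parts, 2)
| fields - parts
If you genuinely need three separate tokens rather than three fields, Simple XML drilldown <eval> token elements should be able to run the same split()/mvindex() expressions at click time.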
Hi there, I have a use case to query the internal and external IP addresses of hosts that have a UF installed. I am using the approach below and hoping for a better solution. Appreciate your help in advance!
For the external IP: index=_internal group=tcpin_connections hostname=* This provides me sourceIp (the external IP).
For the internal IP: index=_internal sourcetype=splunkd_access phonehome followed by a rex command to retrieve the internal IP from the string.
Is this the correct approach? I was hoping for a single search that retrieves both IPs.
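A single-search sketch that stitches the two sources together by hostname; the phonehome rex below is illustrative only and needs to be adapted to the URI structure you are already parsing:
index=_internal group=tcpin_connections hostname=*
| stats latest(sourceIp) AS external_ip BY hostname
| append
    [ search index=_internal sourcetype=splunkd_access phonehome
      | rex field=uri "connection_(?<internal_ip>\d{1,3}(?:\.\d{1,3}){3})_\d+_(?<fwd_host>[^_]+)"
      | stats latest(internal_ip) AS internal_ip BY fwd_host
      | rename fwd_host AS hostname ]
| stats values(external_ip) AS external_ip values(internal_ip) AS internal_ip BY hostname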
Hi All, I am trying to tune a notable called DNS Query Length Outliers, using the MLTK app to set up the data, but the number of notables remains the same. Am I doing something wrong? I followed some instructions on how to build the data model required for the notable to work, but still no luck. Worth mentioning that when I run the SPL in Search, it delivers a different number of notables. Which option from "Experiments" within the MLTK app should I use to make the data work for the notable? The code is from here: https://github.com/splunk/security_content/blob/develop/detections/experimental/network/dns_query_length_outliers___mltk.yml Thank you in advance.
Will your SSL Certificate Checker work for Windows domain controllers?
Hi All, I am getting the below error when sending Microsoft SQL Server table data to a Splunk index through Splunk DB Connect:
2023-01-19 14:02:37.006 +0000 [QuartzScheduler_Worker-19] INFO c.s.dbx.server.dbinput.recordwriter.HecEventWriter - action=write_records batch_size=1000
2023-01-19 14:02:37.006 +0000 [QuartzScheduler_Worker-19] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector
2023-01-19 14:02:37.006 +0000 [QuartzScheduler_Worker-19] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector record_count=1000
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - I/O exception (java.net.SocketException) caught when processing request to {}->http://127.0.0.1:8088: Connection reset by peer (Write failed)
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - Retrying request to {}->http://127.0.0.1:8088
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - I/O exception (java.net.SocketException) caught when processing request to {}->http://127.0.0.1:8088: Connection reset by peer (Write failed)
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - Retrying request to {}->http://127.0.0.1:8088
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - I/O exception (java.net.SocketException) caught when processing request to {}->http://127.0.0.1:8088: Connection reset by peer (Write failed)
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - Retrying request to {}->http://127.0.0.1:8088
2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch java.net.SocketException: Connection reset at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157) at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) at com.codahale.metrics.httpclient.InstrumentedHttpRequestExecutor.execute(InstrumentedHttpRequestExecutor.java:46) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:141) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:113) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollectorLoadBalancer.uploadEvents(HttpEventCollectorLoadBalancer.java:52) at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36) at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:234) at org.easybatch.core.job.BatchJob.call(BatchJob.java:102) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) 2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] ERROR org.easybatch.core.job.BatchJob - Unable to write records java.net.SocketException: Connection reset at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157) at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) at com.codahale.metrics.httpclient.InstrumentedHttpRequestExecutor.execute(InstrumentedHttpRequestExecutor.java:46) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:141) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:113) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollectorLoadBalancer.uploadEvents(HttpEventCollectorLoadBalancer.java:52) at 
com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36) at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:234) at org.easybatch.core.job.BatchJob.call(BatchJob.java:102) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) 2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] INFO org.easybatch.core.job.BatchJob - Job 'testing_1234' finished with status: FAILED 2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] DEBUG c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=closing_db_reader task=testing_1234 2023-01-19 14:02:56.930 +0000 [dw-67 - PUT /api/inputs/testing_1234] INFO com.splunk.dbx.server.task.DefaultTaskService - action=removing_task_from_scheduler task=testing_1234 type=input 2023-01-19 14:02:56.931 +0000 [dw-67 - PUT /api/inputs/testing_1234] INFO org.easybatch.extensions.quartz.JobScheduler - Unscheduling job org.easybatch.core.job.BatchJob@279b98a9 2023-01-19 14:09:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1 2023-01-19 14:19:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1 2023-01-19 14:29:44.304 +0000 [Checkpoint-Cleaner-0] INFO c.s.d.s.m.CheckpointCleaner$CheckpointCleanerTask - action=start_checkpoint_cleaner_task 2023-01-19 14:29:44.305 +0000 [Checkpoint-Cleaner-0] INFO c.s.d.s.m.CheckpointCleaner$CheckpointCleanerTask - action=checkpoint_files_to_be_deleted [] 2023-01-19 14:29:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1 2023-01-19 14:39:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1 2023-01-19 14:49:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1 2023-01-19 14:59:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1
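A connection reset on http://127.0.0.1:8088 from DB Connect often points at a mismatch between the DB Connect HEC settings and the HEC input itself (for example, HEC has "Enable SSL" turned on while DB Connect is posting over plain http, or the port/token is wrong). One hedged way to see the receiving side of the story is to check splunkd's own HEC logging around the same timestamps:
index=_internal sourcetype=splunkd (component=HttpInputDataHandler OR component=HttpListener) log_level=ERROR
Comparing those errors with the DB Connect timestamps above should indicate whether the requests are reaching HEC at all.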
Hello Splunkers, I have this specific use case: I need to retrieve Windows OS logs from multiple machines via the Splunk UF and the Splunk Microsoft TA, forward those logs to an intermediate HF in charge of parsing and anonymization, and then forward them again to another HF. Since an image is worth 1,000 words, I imagine these two architecture possibilities:
Choice 1:
Choice 2:
Points I wonder about:
- Within choice 1, is it a problem to have props and transforms in apps deployed on UFs? Same question: is it a problem to have some inputs.conf on the Linux HF1?
- I will have other use cases like this; would you recommend gathering all anonymization (within props & transforms) in a dedicated app, or would you put it in each TA related to the use case (so basically like choice 1 of the architecture)?
Thanks a lot for your help! GaetanVP
Hello, I have a regex for splitting a person's full name into last name, first name, and middle name. Regex used: (?<prsnl_last>\w+)([\s]*(?<prsnl_credentials>[\w]*)|)[\s]*,[\s]*(?<prsnl_first>\w+)([\s]*(?<prsnl_middle>[\w]+.*?)|) Now I need to remove prsnl_last from the output (basically, to scrub the prsnl_last data from the event). The output should look something like this: "Haikal" and "Campbeli" should be removed. Can someone please help me out? Thank you!
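A hedged sketch of the scrubbing step at search time, reusing the capture group above: once prsnl_last is extracted, replace() can mask that value inside _raw, and the helper field can then be dropped so it never reaches the output:
... | rex "(?<prsnl_last>\w+)([\s]*(?<prsnl_credentials>[\w]*)|)[\s]*,[\s]*(?<prsnl_first>\w+)([\s]*(?<prsnl_middle>[\w]+.*?)|)"
| eval _raw=replace(_raw, prsnl_last, "****")
| fields - prsnl_last
Note that replace() treats the second argument as a regular expression and replaces every occurrence of that value in _raw. If the scrubbing has to happen at index time rather than at search time, it is usually handled with SEDCMD in props.conf instead.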
I'm trying to get data into Splunk by registering Python code as a scripted input. The problem only occurs when I run this code through Splunk: when I run it locally with Splunk's Python directly, there is no problem, but when I run it from Splunk, I get the following error.
The error contents are as follows: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known
So I searched and tried using the http scheme and requests.get(verify=False), but the same error occurs. There is no error when I run it locally, so it seems to be a problem only when the request is sent from Splunk. If anyone knows, I'd appreciate any help.
import requests as req
import json

# Call the TMDB API and return the JSON response as a string
def tmdb_api_call(requestURL, parameters):
    response = req.get(url=requestURL, params=parameters)
    if response.status_code != 200:
        print(response.json())
        exit()
    data = response.json()
    return json.dumps(data)

# Fetch one page of upcoming movies from TMDB
def get_upcoming_movies_by_page(api_key, page_number=1):
    requestURL = "http://api.themoviedb.org/3/movie/upcoming"
    parameters = {"api_key": api_key, "page": page_number}
    return tmdb_api_call(requestURL, parameters)

def main():
    api_key = "key"
    upcoming_movie_list = get_upcoming_movies_by_page(api_key, 1)
    data = json.loads(upcoming_movie_list)
    print(json.dumps(data["results"]))

main()
(code from https://github.com/siddharthajuprod07/youtube)
Hi, I recently came across this warning in Splunk Web and was wondering if anyone else has encountered it before and how to go about solving it. The warning is as follows: "Events are not displayed in the search results because _raw fields exceed the limit of 16777216 characters. Ensure that _raw fields are below the given character limit or switch to the CSV serialization format by setting 'results_serial_format=csv' in limits.conf. Switching to the CSV serialization....." Any input is greatly appreciated, and thank you in advance. Mikhael
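Before changing limits.conf as the message suggests, it may help to confirm which source is actually producing the oversized events. A hedged sketch (the index name and size threshold are placeholders); because stats returns only the summary rather than the raw events, it should sidestep the display limit:
index=your_index
| eval raw_length=len(_raw)
| where raw_length > 1000000
| stats count AS oversized_events max(raw_length) AS max_raw_length BY index sourcetype source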
Hi Splunkers, in our environment we have seen event peaks every day between certain hours, namely 03:00 - 04:00 AM. Now we have to identify which sources are responsible for those peaks. The search we must build has to do this: for every day in a certain time period (for example, a week), at the same hour (03:00 - 04:00 AM), tell us the total count of events for every source. I know that with this search: index=* | chart count over index by sourcetype I can show a column chart that gives me the total count of events for every index and, for each of them, a picture of the involved sourcetypes. But how do I tell the search "show me this count, for every day, in the 03:00 - 04:00 AM time range"?
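A hedged sketch using the default date_hour field (available when event timestamps are parsed normally): it restricts every day in the window to the 03:00 - 04:00 hour and then counts per day, index, and sourcetype; swap sourcetype for source if that is the split you need:
index=* date_hour=3 earliest=-7d@d latest=now
| bin _time span=1d
| stats count BY _time index sourcetype
If date_hour is missing for some sourcetypes, filtering with | where tonumber(strftime(_time, "%H"))==3 before the bin can stand in for it.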
Hi, I have edited the inputs.conf file inside an app.tgz. How can we repackage (re-compress) the app after editing the config file on Windows? Thank you.
In my Splunk Cloud instance, I am ingesting WAF security events from a SaaS service via HEC. The events are in JSON format, so my HEC data input is configured with a sourcetype of _json. I now need to ingest the WAF request events from the SaaS service, which are also in JSON format, so I'd like to send those to the same index but with a different sourcetype to distinguish the two types of events. How can I modify the sourcetype for the WAF security events from _json to waf_sec and then create a new HEC data input for the WAF request events with a sourcetype of waf_req, yet retain the JSON format? Thx