All Topics



I'm trying to extract fields for Symantec ProxySG with transforms.conf and props.conf, but it isn't working. Here is a sample log:

Aug  4 16:31:58 2024-08-04 08:31:28 "hostname" 5243 xx.xx.xx.xx 200 TCP_TUNNELED 6392 2962 CONNECT tcp domain.com 443 / - yyyy - xx.xx.xx.xx xx.xx.xx.xx "None" - - - - OBSERVED - - xx.xx.xx.xx - 7b711515341865e8-0000000008da5077-0000000066af3c5e - -

Here is my configuration:

transforms.conf:

REGEX = ^.*"CN-SH-PSG-01"\s+(?<bytes_in>\d+)\s+(?<client_ip>\d+\.\d+\.\d+\.\d+)\s+(?<status_code>\d+)\s+(?<action>[^\s]+)\s+(?<bytes_out>\d+)\s+(?<bytes_out2>[^\s]+)\s+(?<http_method>[^\s]+)\s+(?<protocol>[^\s]+)\s+(?<domain>[^\s]+)\s+(?<port>\d+)\s+[^\s]+\s+(?<user>[^\s]+)\s+[^\s]+\s+[^\s]+\s+(?<mime_type>[^\s]+)\s+[^\s]+\s+"(?<user_agent>[^"]+)"
FORMAT = bytes_in::$1 client_ip::$2 status_code::$3 action::$4 bytes_out::$5 bytes_out2::$6 http_method::$7 protocol::$8 domain::$9 port::$10 user::$11 mime_type::$12 user_agent::$13

props.conf:

[source::syslog]
TRANSFORMS-proxysg_field_extraction = proxysg_field_extraction

I've tried changing the config, but the fields are still not extracted, and I have tested my regex on regex101.com, where it works fine.
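A minimal sketch of how this kind of extraction is commonly wired up, assuming the events arrive under source::syslog and the fields are wanted at search time; the stanza name and the shortened regex are illustrative, not a drop-in fix. With named capture groups in REGEX, no FORMAT line is needed, and search-time extractions are attached with REPORT- rather than the index-time TRANSFORMS- setting:

# transforms.conf -- named capture groups carry the field names,
# so FORMAT can be omitted
[proxysg_field_extraction]
REGEX = ^[^"]*"(?<dest_host>[^"]+)"\s+(?<bytes_in>\d+)\s+(?<client_ip>\d+\.\d+\.\d+\.\d+)\s+(?<status_code>\d+)\s+(?<action>\S+)

# props.conf -- REPORT- applies the transform at search time
[source::syslog]
REPORT-proxysg_field_extraction = proxysg_field_extraction

Matching the quoted hostname with "(?<dest_host>[^"]+)" rather than a hard-coded "CN-SH-PSG-01" also keeps the regex aligned with the sample log, which shows "hostname" in that position.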
While adding a device to the Citrix add-on, it gives the following message:

Failed to verify your SSL certificate. Verify your SSL configurations in splunk_ta_citrix_netscaler_settings.conf and retry.

Where can I solve this issue through the GUI interface, as I can't access the CLI?
I want to get the following in a single query: 1. dc of field1 overall, 2. dc of field2 by field1.
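A minimal SPL sketch of one way to get both numbers in a single result set (the index name is a placeholder): stats computes the per-field1 distinct count, and eventstats then counts the distinct field1 values across all of those rows.

index=your_index
| stats dc(field2) AS dc_field2 BY field1
| eventstats dc(field1) AS dc_field1_overall

Each output row then carries the per-field1 dc(field2) alongside the overall dc(field1).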
Hello Splunkers!! I am getting the message below while loading a Splunk dashboard, as I am using one JavaScript file and one CSS file in the dashboard. Please help me fix this error so it will completely disappear. Error while loading Splunk dashboard. Error showing in developer console.
Hi Splunkers! I want to get data in a specific time range using the earliest and latest time modifiers. I have checked with the time picker that events exist within the specified range, but when I run an SPL query it doesn't work. I have tried ISO format and a custom format, as shown below. When I use ISO format I get an error:

index=main sourcetype="access_combined_wcookie" earliest="2024-01-15T20:00:00" latest="2024-02-22T20:00:00"

And when I use the custom format shown below, it returns 0 events:

index=main sourcetype="access_combined_wcookie" earliest="1/15/2024:20:00:00" latest="2/22/2024:20:00:00"

Please help; I want to do this using the earliest and latest modifiers only.
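For reference, the documented format for absolute earliest and latest values is %m/%d/%Y:%H:%M:%S, with zero-padded month and day. A sketch in that format, assuming the index really holds events in this window:

index=main sourcetype="access_combined_wcookie" earliest="01/15/2024:20:00:00" latest="02/22/2024:20:00:00"

If a query of this shape still returns 0 events, it is worth comparing the indexed _time values against the window, since the syntax itself is valid.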
All the dashboards say the data model is not found.
I'm trying to create some dashboards to make reading _internal logs easier, and I'm trying to figure out what all of the fields we are getting actually are. This Splunk doc has the gist of what I am looking at, but we have more fields than that. The doc mentions Apache and its logs, and while we do use Apache, I'm not well versed enough in it to fully understand what I was looking at. I think we are adding extracted fields, or adding values in the processing that Splunk does. How can I track down which .conf file is adding the fields? I'd like a better understanding of where these values come from; there are a lot more fields than the _raw logs seem to have, like metadata.
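One common way to trace where search-time fields come from is btool, which prints the effective configuration along with the file each setting came from. A sketch, where the paths assume a default Linux install and the sourcetype is a placeholder:

$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i <sourcetype>
$SPLUNK_HOME/bin/splunk btool transforms list --debug

The --debug flag prefixes every line with the originating .conf file, which makes it possible to see which app contributes each EXTRACT-, REPORT-, or FIELDALIAS- setting for a given sourcetype.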
I request the ability to create groups of users in Enterprise Security, so that when you need to add them to an investigation you don't have to go through and select your whole team individually; instead you could have groups that contain those people, like group IR and group SOC. If you have a team of more than 3 people, it gets old having to add them all individually. I would also request a notes section where you can place notes on the investigation for all collaborators to see.
Hi, I have a dashboard created in Dashboard Studio. It contains two dropdowns (Country and State). They are interdependent for displaying values, and each has a default value of * (All). When I select a value from the Country dropdown, it loads that country's states in dropdown 2. But when I make a second selection, for example USA --> India, States will still show California. Only on clicking the States dropdown later can I see India-related states, with California as the first value. Is there a way the dropdown selection can be reset automatically based on the other dropdown's value? For example, when I select USA, the states of USA must be displayed in the States dropdown; when I change the selection from USA to India, the States dropdown should show either All or the first Indian state. Please help with this. Regards, PNV
Team, I was just able to create a search in Splunk to detect credit card numbers. PCI data was also onboarded into our new Splunk Cloud instance. How can we obscure these numbers once they are found and verified to in fact be an exposed user credit card number?
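For data not yet indexed, one common approach is index-time masking with SEDCMD in props.conf, applied where parsing happens (on a heavy forwarder, or via a support-managed change in Splunk Cloud). A sketch, where the sourcetype name and the exact card-number pattern are placeholders:

# props.conf -- mask all but the first and last four digits at index time
[your_pci_sourcetype]
SEDCMD-mask_ccn = s/\b(\d{4})\d{5,11}(\d{4})\b/\1XXXXXXXX\2/g

Note that SEDCMD only affects events indexed after the change; data already indexed cannot be rewritten in place and would have to be deleted and re-ingested.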
I have a field, message, that when I run the search index=example123 host=5566 | search "*specials word*" and table message, displays as in the example below:

2024-08-02 16:45:21- INFO Example (['test1' , 'test2', 'test3', 'test4', 'test5', 'test6', 'test7)'] , ['Medium', 'Large ', 'Small', 'Small ', 'Large ', 'Large ', 'Large '])

Is there a way to run a command so that the data in the "message" field can be extracted into its own fields, or displayed like this, matching 1:1 in a table:

test1           test2       test3        test4         test5           test6          test7
Medium     Large      Small        Small         Large        Large          Large

or test1 = Medium, test2 = Large, test3 = Small, etc.
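A minimal SPL sketch of one way to pair the two lists, using rex to capture each bracketed list and mvzip/mvexpand to align them element by element; the cleanup regexes are approximate and assume both lists always have equal length:

index=example123 host=5566 "*specials word*"
| rex field=message "\(\[(?<keys>[^\]]+)\]\s*,\s*\[(?<vals>[^\]]+)\]\)"
| eval keys=split(replace(keys, "[') ]", ""), ",")
| eval vals=split(replace(vals, "[' ]", ""), ",")
| eval pair=mvzip(keys, vals, "=")
| mvexpand pair
| rex field=pair "(?<name>[^=]+)=(?<size>.+)"
| xyseries _time name size

The replace() calls strip the quotes, stray parenthesis, and padding spaces visible in the sample; xyseries then pivots each event into one row with test1..test7 as columns.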
There are certain data labs created where the data stops indexing from some of them. It should index data every 15 minutes, but that's not happening, and that data is reflected on the Splunk dashboard. Can anyone assist or suggest why the data for some data labs is not indexing every 15 minutes?
Hello Everyone, I want to integrate Power BI with Splunk and view Power BI logs in Splunk for analysis. Can someone explain how to integrate Power BI with Splunk and how to get logs from Power BI into Splunk?  
Hi, I'm trying to plot some data on one chart for 2 different, non-consecutive months, e.g. January and August, following the post below:

https://www.splunk.com/en_us/blog/tips-and-tricks/two-time-series-one-chart-and-one-search.html

I'm trying to calculate the median and plot just those 2 months in a single-month timeframe. The query below would work for consecutive months, but I cannot figure out how to eval my time for arbitrary months; if I add to my info_min_time, my marker is plotted over several months.

earliest="1/1/2024:00:00:00"
| bin span=1h _time
| addinfo
| eval marker = if(_time < info_min_time + 60*24*3600, "January", "February")
| eval _time = if(_time < info_min_time + 60*24*3600, _time + 60*24*3600, _time)
| chart count max(data) by _time marker
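A sketch of one way to handle arbitrary months: label each event by its calendar month instead of measuring from info_min_time, then shift the later month back onto the earlier one's axis. The 213-day offset is the gap between 1 January and 1 August 2024; the index name and data field are placeholders:

index=your_index earliest="1/1/2024:00:00:00" latest="9/1/2024:00:00:00"
| eval month=strftime(_time, "%m")
| where month="01" OR month="08"
| bin span=1h _time
| eval marker=if(month="01", "January", "August")
| eval _time=if(month="08", _time - 213*24*3600, _time)
| chart median(data) by _time marker

Because the month label is computed before _time is shifted, the marker stays correct no matter how far apart the two months are.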
Here is my output data. I want to create a table of path and responsetime; can you please help? The expected output is below:

path                                                                                     responsetime
/rkedgeapp/provider/dental/keysearch/           156

{"time": 1722582494370,"host1": "arn:aws:firehose:ca-central-1:2222222:deliverystream/Splunk-Kinesis-apigateway-CA","source1": "rkedgevil-restapi-Access-Logs:API-Gateway-Access-Logs_8o2y6hzl6e/prod","event": "{ \"requestId\":\"d85fa529-3979-44a3-9018-21f81e12eafd\", \"ip\": \"40.82.191.190\", \"caller\":\"-\", \"user\":\"-\",\"requestTime\":\"02/Aug/2024:07:08:14 +0000\", \"httpMethod\":\"POST\",\"resourcePath\":\"/{proxy+}\", \"status\":\"200\",\"protocol\":\"HTTP/1.1\", \"responseLength\":\"573\", \"clientCertIssuerDN\":\"C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Organization Validation Secure Server CA\", \"clientCertSerialNumber\":\"22210811239199552309700144370732535146\", \"clientCertNotBefore\":\"Jan 22 00:00:00 2024 GMT\", \"clientCertNotAfter\":\"Jan 21 23:59:59 2025 GMT\", \"path\":\"/rkedgeapp/provider/dental/keysearch/\", \"responsetime\":\"156\" }"}
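A minimal SPL sketch using spath: the event field in this payload is itself a JSON string, so it is extracted first and then parsed a second time (the base search is a placeholder):

index=your_index sourcetype=your_sourcetype
| spath path=event output=event_json
| spath input=event_json path=path
| spath input=event_json path=responsetime
| table path responsetime

The first spath pulls the embedded JSON string out of the outer object; the next two parse path and responsetime out of that inner document.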
I have installed AppDynamics locally, with a private Synthetic Server + PSA Agent on the same host, and the Controller and EUM on separate hosts. My issue is that whenever I schedule a job in the User Experience dashboard, the agent hits the provided URL and gets a response, which is displayed on the session tab, but only one hit is registered and shown despite the job being scheduled to run every minute. We have looked into the scheduler-shepherd logs, where we're getting the following:

INFO 2024-08-01 10:13:24,121 SyntheticBackgroundScheduler_Worker-16 JobRunShell Job background-job.com.appdynamics.synthetic.scheduler.core.tasks.ClusterWatcher threw a JobExecutionException:
! java.io.IOException: error=2, No such file or directory
! at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
! at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:314)
! at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:244)
! at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1110)
! ... 12 common frames omitted
! Causing: java.io.IOException: Cannot run program "jps" (in directory "."): error=2, No such file or directory
! at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1143)
! at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1073)
! at java.base/java.lang.Runtime.exec(Runtime.java:594)
! at org.apache.commons.exec.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:58)
! at org.apache.commons.exec.DefaultExecutor.launch(DefaultExecutor.java:254)
! at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:319)
! at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160)
! at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147)
! at com.appdynamics.synthetic.cluster.local.PSCluster.findAllInstanceIds(PSCluster.java:58)
! ... 4 common frames omitted
! Causing: com.appdynamics.synthetic.cluster.Cluster$ClusterException: Failed to query active schedulers.
! at com.appdynamics.synthetic.cluster.local.PSCluster.findAllInstanceIds(PSCluster.java:67)
! at com.appdynamics.synthetic.cluster.local.PSCluster.findHealthyInstanceIds(PSCluster.java:46)
! at com.appdynamics.synthetic.scheduler.core.tasks.ClusterWatcher.execute(ClusterWatcher.java:49)
! ... 2 common frames omitted
! Causing: org.quartz.JobExecutionException: Exception while attempting to find healthy instances
! at com.appdynamics.synthetic.scheduler.core.tasks.ClusterWatcher.execute(ClusterWatcher.java:52)
! at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
! at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
WARN 2024-08-01 10:13:27,484 dw-583 - PUT /v1/schedule/02a54278-0b5f-4a0e-95f1-9299cd9b4dbc spi idx-requestId=96a2de91531b226a, idx-account=********* javax.persistence.spi::No valid providers found.
INFO 2024-08-01 10:13:27,502 dw-583 - PUT /v1/schedule/02a54278-0b5f-4a0e-95f1-9299cd9b4dbc LicenseHelper idx-requestId=96a2de91531b226a, idx-scheduleId=02a54278-0b5f-4a0e-95f1-9299cd9b4dbc, idx-appKey=*********, idx-account=********* Job with id 02a54278-0b5f-4a0e-95f1-9299cd9b4dbc and description http://172.16.21.70 previously used 1 units of license
INFO 2024-08-01 10:13:27,502 dw-583 - PUT /v1/schedule/02a54278-0b5f-4a0e-95f1-9299cd9b4dbc LicenseHelper idx-requestId=96a2de91531b226a, idx-scheduleId=02a54278-0b5f-4a0e-95f1-9299cd9b4dbc, idx-appKey=*********, idx-account=********* Job with id 02a54278-0b5f-4a0e-95f1-9299cd9b4dbc and description http://172.16.21.70 now has 1 pages and 1 locations and requires 1 units of license
INFO 2024-08-01 10:13:27,505 hystrix-SchedulerGroup-13 SynthJobRunner Will run 1 measurement(s) for job=Schedule{account='*********', appKey='*********', id='02a54278-0b5f-4a0e-95f1-9299cd9b4dbc', description='http://172.16.21.70'}
INFO 2024-08-01 10:13:27,505 hystrix-SchedulerGroup-13 SynthJobRunner Requesting measurement for job=Schedule{account='*********', appKey='*********', id='02a54278-0b5f-4a0e-95f1-9299cd9b4dbc', description='http://172.16.21.70'}, location=NOD:*********, browser=IE11
INFO 2024-08-01 10:13:27,512 hystrix-SchedulerGroup-13 SynthJobRunner Updating state for schedule
WARN 2024-08-01 10:13:33,178 okhttp-eventsource-stream-[]-0 DataSource Error in stream connection (will retry): java.net.UnknownHostException: stream.launchdarkly.com: Name or service not known
INFO 2024-08-01 10:13:33,178 okhttp-eventsource-stream-[]-0 DataSource Waiting 22193 milliseconds before reconnecting...
WARN 2024-08-01 10:13:33,178 okhttp-eventsource-events-[]-0 DataSource Encountered EventSource error: java.net.UnknownHostException: stream.launchdarkly.com: Name or service not known
WARN 2024-08-01 10:13:40,054 hz.epic_ramanujan.cached.thread-9 MulticastService [172.18.0.1]:5701 [dev] [5.3.2] Sending multicast datagram failed. Exception message saying the operation is not permitted usually means the underlying OS is not able to send packets at a given pace. It can be caused by starting several hazelcast members in parallel when the members send their join message nearly at the same time.
! java.net.SocketException: Message too long
! at java.base/sun.nio.ch.DatagramChannelImpl.send0(Native Method)
! at java.base/sun.nio.ch.DatagramChannelImpl.sendFromNativeBuffer(DatagramChannelImpl.java:901)
! at java.base/sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:863)
! at java.base/sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:821)
! at java.base/sun.nio.ch.DatagramChannelImpl.blockingSend(DatagramChannelImpl.java:853)
! at java.base/sun.nio.ch.DatagramSocketAdaptor.send(DatagramSocketAdaptor.java:218)
! at java.base/java.net.DatagramSocket.send(DatagramSocket.java:664)
! at com.hazelcast.internal.cluster.impl.MulticastService.send(MulticastService.java:309)
! at com.hazelcast.internal.cluster.impl.MulticastJoiner.searchForOtherClusters(MulticastJoiner.java:113)
! at com.hazelcast.internal.cluster.impl.SplitBrainHandler.searchForOtherClusters(SplitBrainHandler.java:75)
! at com.hazelcast.internal.cluster.impl.SplitBrainHandler.run(SplitBrainHandler.java:42)
! at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
! at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
! at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
! at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
! at java.base/java.lang.Thread.run(Thread.java:840)
! at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
! at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:111)
INFO 2024-08-01 10:13:40,409 Account Credential Cache-1 AccountCredentialCache Refreshing account credential cache
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Retrieved Accounts from EUMAccountResponse. Account list size: 1 Date: 2024-08-01T15:43:41.520+05:30
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 1}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 1
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 0
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 0
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 0
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
WARN 2024-08-01 10:13:42,479 hz.awesome_ramanujan.cached.thread-1 MulticastService [172.18.0.1]:5702 [dev] [5.3.2] Sending multicast datagram failed. Exception message saying the operation is not permitted usually means the underlying OS is not able to send packets at a given pace. It can be caused by starting several hazelcast members in parallel when the members send their join message nearly at the same time.

Please look into the above ERROR logs and kindly assist. I appreciate your quick response!!

Thanks
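A quick way to check the cause of the first exception on the scheduler host, noting that jps ships with the JDK rather than the JRE (the paths are illustrative):

which jps
ls "$JAVA_HOME/bin/jps"

If jps is absent from the PATH of the user running the private synthetic scheduler, the "Cannot run program \"jps\"" / "Failed to query active schedulers" chain above would be consistent with that.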
Good day, I am pretty new to Splunk and want a way to join two queries together.

Query 1 - gives me all of my assets:

| tstats count where index=_internal OR index=* BY host

Query 2 - gives me all of my devices that ingest into the forwarder:

index="_internal" source="*metrics.log*" group=tcpin_connections
| dedup hostname
| table date_hour, date_minute, date_mday, date_month, date_year, hostname, sourceIp, fwdType, guid, version, build, os, arch
| stats count

How can I join these to create a query that finds all my devices (query 1), checks whether they have the forwarder installed (query 2), and shows me the devices that are not in query 2?
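A minimal sketch of one common pattern: keep query 1 as the outer search and exclude the forwarder hosts with a NOT subsearch. Hostnames are lower-cased on both sides to make the comparison case-insensitive, and subsearches are subject to result limits on very large host counts:

| tstats count where index=* OR index=_internal BY host
| eval host=lower(host)
| search NOT
    [ search index=_internal source=*metrics.log* group=tcpin_connections
      | dedup hostname
      | eval host=lower(hostname)
      | fields host ]

The subsearch returns the set of hosts seen as forwarder connections, and the NOT leaves only the assets without a forwarder reporting in.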
Hello, how can I use transaction to group events using fields, and to group events using fields and time? I am new to Splunk and preparing for the Splunk Core Certified Power User certification exam. I would be very happy if there is a resource where I can get comprehensive information. Thank you!
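A minimal sketch of both forms, using field names from the Splunk tutorial data as placeholders. The first groups purely by field values; the second adds time constraints, where maxspan bounds the total duration of a transaction and maxpause bounds the gap between consecutive events:

index=main sourcetype=access_combined_wcookie
| transaction JSESSIONID clientip

index=main sourcetype=access_combined_wcookie
| transaction JSESSIONID clientip maxspan=15m maxpause=5m

Each resulting transaction also carries duration and eventcount fields, and the startswith/endswith options are covered alongside these in the Power User exam material.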
I have to create an alert to monitor any issue that happens between the HF and the indexers, by checking internal logs. I am using this SPL; I need suggestions or a corrected SPL:

index=_internal source=*metrics.log group=tcpin_connections hostname="*hf*"
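One alternative sketch that looks for forwarding errors rather than successful connections, assuming the heavy forwarder hostnames contain "hf" (component names come from splunkd.log and may vary by version):

index=_internal source=*splunkd.log* host="*hf*" component=TcpOutputProc (log_level=ERROR OR log_level=WARN)
| stats count BY host, component

Alerting when count > 0 flags output failures on the HF side; conversely, an absence-of-data alert on the original tcpin_connections search (append | stats count and trigger when count = 0) catches the case where the HF stops sending entirely.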
Hi, I want to rename the fields while writing to a lookup table using the outputlookup command. Is there a way to do it? I intend to use the lookup table in the next run of the same query, so I want separate field names in the lookup table. Thanks in advance for the suggestions.
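outputlookup itself has no rename option; the usual pattern is to rename just before writing (the lookup name and field names here are placeholders):

... | rename field1 AS stored_field1, field2 AS stored_field2
| outputlookup my_state.csv

On the next run the lookup can then be read back with inputlookup, or joined via the lookup command, under the stored names, so they won't collide with the live field names in the search.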