All Topics

Hi, we have a dataset that has improper line breaking on a few of its events. We have added configuration so that future data is ingested with proper timestamp extraction and recognition, but we also want the already-existing data to parse and show up properly. Is there a way to perform the line breaking/masking of already-ingested data from the search head by updating the configuration in props.conf? Thanks in advance.
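For reference, the index-time configuration mentioned above is usually a props.conf stanza along these lines (the sourcetype name, break pattern, and time format here are placeholders, not the poster's actual settings):

```ini
# props.conf -- hypothetical sourcetype; adjust patterns to the real data
[my_custom:sourcetype]
SHOULD_LINEMERGE = false
# break events before each leading ISO-style date
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

These settings take effect at parse time on the indexer or heavy forwarder, so they apply only to data ingested after the change; they do not rewrite events already on disk.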
Hi, we installed the Microsoft Azure Add-on for Splunk (https://splunkbase.splunk.com/app/3757/, version 2.1.0). Is there any documentation we can use for the Azure-side configuration? On the Splunk side we are getting the following error; is there a config doc for what should be done on the Azure side?

2020-06-24 16:26:32,680 ERROR pid=21096 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_event_hub.py", line 92, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_azure_event_hub.py", line 113, in collect_events
    partition_ids = client.get_partition_ids()
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 163, in get_partition_ids
    return self.get_properties()['partition_ids']
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 146, in get_properties
    response = self._management_request(mgmt_msg, op_type=b'com.microsoft:eventhub')
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 127, in _management_request
    self._handle_exception(exception, retry_count, max_retries)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 105, in _handle_exception
    _handle_exception(exception, retry_count, max_retries, self)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/error.py", line 196, in _handle_exception
    raise error
ConnectError: Unable to open management session. Please confirm URI namespace exists.
Unable to open management session. Please confirm URI namespace exists.

. . . .

2020-06-28 16:36:31,154 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Failure: getaddrinfo failure 66.' ('/data/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/adapters/socketio_berkeley.c':'lookup_address_and_initiate_socket_connection':283)
2020-06-28 16:36:31,155 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'lookup_address_and_connect_socket failed' ('/data/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/adapters/socketio_berkeley.c':'socketio_open':766)
2020-06-28 16:36:31,155 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Closing tlsio from a state other than TLSIO_STATE_EXT_OPEN or TLSIO_STATE_EXT_ERROR'
2020-06-28 16:36:31,155 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Invalid tlsio_state. Expected state is TLSIO_STATE_OPENING_UNDERLYING_IO.' ('/data/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/adapters/tlsio_openssl.c':'on_underlying_io_open_complete':760)
2020-06-28 16:36:31,156 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Failed opening the underlying I/O.' ('/data/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/adapters/tlsio_openssl.c':'tlsio_openssl_open':1258)
2020-06-28 16:36:31,156 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'xio_open failed' ('/data/src/vendor/azure-uamqp-c/src/saslclientio.c':'saslclientio_open_async':1097)
2020-06-28 16:36:31,156 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Opening the underlying IO failed' ('/data/src/vendor/azure-uamqp-c/src/connection.c':'connection_open':1344)
2020-06-28 16:36:31,156 INFO pid=19209 tid=MainThread file=connection.py:_state_changed:177 | Connection 'e3accde9-ddf9-48cc-8b99-b7b60b83596b' state changed from <ConnectionState.START: 0> to <ConnectionState.END: 13>
2020-06-28 16:36:31,156 INFO pid=19209 tid=MainThread file=connection.py:_state_changed:181 | Connection with ID 'e3accde9-ddf9-48cc-8b99-b7b60b83596b' unexpectedly in an error state. Closing: False, Error: None
2020-06-28 16:36:31,156 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Begin session failed' ('/data/src/vendor/azure-uamqp-c/src/link.c':'link_attach':1154)
2020-06-28 16:36:31,156 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Link attach failed' ('/data/src/vendor/azure-uamqp-c/src/message_receiver.c':'messagereceiver_open':362)
2020-06-28 16:36:31,156 DEBUG pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | Management link open: 1
2020-06-28 16:36:31,157 INFO pid=19209 tid=MainThread file=mgmt_operation.py:__init__:65 | 'Failed opening message receiver' ('/data/src/vendor/azure-uamqp-c/src/amqp_management.c':'amqp_management_open_async':981)
2020-06-28 16:36:31,958 INFO pid=19209 tid=MainThread file=error.py:_handle_exception:233 | u'eventhub.pysdk-3ff58abd' has an exception (AMQPConnectionError('Unable to open management session. Please confirm URI namespace exists.',)). Retrying...
2020-06-28 16:36:31,958 DEBUG pid=19209 tid=MainThread file=client.py:close:295 | Closing non-CBS session.
Upgrading to VERSION=8.0.4.1, I'm getting this error running as ROOT!

Migrating to: VERSION=8.0.4.1 BUILD=ab7a85abaa98 PRODUCT=splunk PLATFORM=Linux-x86_64
Copying '/opt/splunk/etc/myinstall/splunkd.xml' to '/opt/splunk/etc/myinstall/splunkd.xml-migrate.bak'.
Checking saved search compatibility...
Handling deprecated files...
Checking script configuration...
An unforeseen error occurred:
Exception: <class 'PermissionError'>, Value: [Errno 13] Permission denied: '/opt/splunk/etc/system/local/inputs.conf.tmp'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1353, in <module>
    sys.exit(main(sys.argv))
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1206, in main
    parseAndRun(argsList)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1061, in parseAndRun
    retVal = cList.getCmd(command, subCmd).call(argList, fromCLI = True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 293, in call
    return self.func(args, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/control_api.py", line 30, in wrapperFunc
    return func(dictCopy, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/_internal.py", line 189, in firstTimeRun
    migration.autoMigrate(args[ARG_LOGFILE], isDryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 3094, in autoMigrate
    migInputs_3_3_0(migInputsConf, dryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 1682, in migInputs_3_3_0
    comm.sed(policySearch, policyReplace, path, inPlace = True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli_common.py", line 1232, in sed
    outFile = open(tmpPath, 'w')
PermissionError: [Errno 13] Permission denied: '/opt/splunk/etc/system/local/inputs.conf.tmp'
Please file a case online at http://www.splunk.com/page/submit_issue
Error running pre-start tasks

Root is how I always run the upgrade process, yet I have never run into this issue before!
Hi! I have a panel that displays projects with fields like project_id and project_title. Each project has tasks (task_id). How do I calculate the number of tasks in the same project? Thank you!
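A search along these lines could drive such a panel, assuming the fields named above (project_id, project_title, task_id) are already extracted; the index name is a placeholder:

```
index=my_projects
| stats dc(task_id) AS task_count BY project_id project_title
```

dc() counts distinct task_id values, so repeated events for the same task are counted once; use count instead if each event represents exactly one task.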
The following alert does not trigger at either the expected time or frequency. The goal was a check every 10 minutes over the last 10 minutes, but although I performed several tests and the data were there (the alert's search was verified), I could see only a single record in the "Triggered Alerts" view, and only after 16 hours. Splunk is 7.1.

[Access Control Errors]
alert.severity = 4
alert.suppress = 0
alert.track = 1
counttype = number of events
cron_schedule = 0 * * * *
description = Access Control Errors
dispatch.earliest_time = -10m
dispatch.latest_time = now
display.general.type = statistics
display.page.search.tab = statistics
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = omega_core_audit
request.ui_dispatch_view = search
search = `mc_unf_md` | search SEO_ACC=1 RETURNCODE!=0

The same alert (with data from the same program) works OK on Splunk 6.4, with a 1-hour frequency and an e-mail action:

[Omega CA Access Control Errors]
action.email = 1
action.email.include.trigger_time = 1
action.email.inline = 1
action.email.sendresults = 1
action.email.to = altin.karaulli@unionbank.al
action.email.useNSSubject = 1
alert.suppress = 0
alert.track = 0
counttype = number of events
cron_schedule = 0 * * * *
description = Omega CA Access Control Errors
dispatch.earliest_time = -1h
dispatch.latest_time = now
display.events.fields = ["host","source","sourcetype","ACTION_ID","ACTION_NAME","USERNAME","USERHOST","OS_PROCESS","OBJECT_NAME","DB_NAME","POLICY_TYPE_CODE","OS_USER","RETURNCODE","TERMINAL"]
display.events.type = table
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = omega_core_audit
request.ui_dispatch_view = search
search = index=omega_ca SEO_ACC=1 RETURNCODE!=0

Is there anything wrong in the first 7.1 triggered-alert case? Best regards, Altin
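One point of comparison against the stated goal of a check every 10 minutes: in standard cron syntax (which cron_schedule in savedsearches.conf uses), `0 * * * *` fires once per hour at minute 0, while a 10-minute schedule would be written as:

```ini
cron_schedule = */10 * * * *
```

This is offered only as a reference against the stanza above, not as a confirmed diagnosis of the alerting problem.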
Hello, is there a length limit on searches? I have been using the NOT operator extensively in my query for error-code searches. After a certain length, the NOT operator is greyed out; it becomes enabled again when I delete the lengthy error-code clause from the search. Thanks, Raj
Hello All, we have a centralized syslog receiver named "spl-fwdser" which receives logs from various devices and pushes them into our Splunk instances. In recent days it has been triggering many auto incidents for a "Disk Space" issue, with the description saying "Failed to run script". I checked and found there is no disk space on the server. All the incidents that were triggered have since been auto-closed by the ServiceNOW ticketing system with notes saying "Incident resolved due to clear alert from monitoring". Please let me know what is causing this server to trigger so many auto incidents and how to fix this issue. Thanks everyone.
I've recently started a new job, and one of my duties is to take over managing Splunk from the previous administrator, who left on short notice. I've got some lab knowledge of Splunk and have been a user, but never an administrator, so I'm not really sure we're meeting best practice with the current architecture. Our current design is 1 x (search head/indexer) and three search peers, all of which are configured under 'Distributed Search -> Search Peers'. The problem is that whenever I search for events, I only see events from the distributed peers, and not from the indexer running on the search head. I'm able to verify that it is indeed indexing data, but only if I explicitly add 'splunk_server=*' to the SPL. Is this due to my current configuration, or is it possible I'm missing something in the configs? Appreciate any help anyone can provide!
I configured AppDynamics for monitoring my Java app running on a Payara server. My application talks to multiple databases, including Oracle, MySQL, MS SQL Server, and EDB (PostgreSQL). The backend auto-discovery feature picks up all JDBC calls except for EDB. EDB is a custom version of PostgreSQL, and we use the EDB JDBC driver for the application's connection to the DB. I don't see any calls to the EDB DB. Is there any way to modify the discovery to detect the JDBC calls to the EDB DB? Thanks!
I'm trying to delete duplicates using the method here: https://community.splunk.com/t5/Splunk-Search/How-to-delete-duplicate-logs/td-p/300343 But because I have so many duplicates (>10k), it doesn't delete them all in one run. Is there another way to delete everything at once?
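For context, the pattern in the linked answer boils down to searching out the extra copies of each event and piping them to `delete` (a sketch; the index name and the definition of a duplicate are placeholders, and `delete` requires the can_delete role):

```
index=my_index
| streamstats count AS copy_number BY _raw
| search copy_number > 1
| delete
```

Since a single run doesn't remove everything here, one low-tech workaround is simply re-running the same search until it returns no events; I'm not aware of a flag on `delete` itself that raises a per-run cap.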
I would like some advice on the best way to implement the following solution. I want to get NetFlow data coming from Cisco devices (via netflow-exporter) into Splunk Enterprise running on a Windows server. The problem is that Windows has a WinPcap vulnerability, and I would rather not use any add-ons that contain wpcap.dll. My idea is to set up a Linux server with the universal forwarder to ingest the NetFlow data and forward it to Splunk Enterprise on Windows without any add-ons. I am having a hard time determining whether this is possible, or whether I will require a TA on Splunk Enterprise.
I have an array of objects containing a field componentType with value "Software" or "License". In the same object there is a field downloadCount expressing how many files were downloaded for that software/license. I need to create a table where each row shows the total number of file downloads for both software and licenses per array of objects, e.g.:

Software Downloads    License Downloads
5                     1
0                     0
...                   ...

Here is how one row of the data looks:

[ {componentType=Software, downloadCount=2}, {componentType=License, downloadCount=1}, {componentType=Software, downloadCount=3} ]

Any help is appreciated.
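One way to sketch this in SPL, assuming each event carries one such array and the pairs can be pulled out with a regex (field and index names here are placeholders):

```
index=my_index
| rex field=_raw max_match=0 "componentType=(?<ct>\w+),\s*downloadCount=(?<dc>\d+)"
| eval pairs=mvzip(ct, dc, "|")
| mvexpand pairs
| eval type=mvindex(split(pairs,"|"),0), downloads=tonumber(mvindex(split(pairs,"|"),1))
| stats sum(eval(if(type=="Software", downloads, 0))) AS "Software Downloads"
        sum(eval(if(type=="License", downloads, 0))) AS "License Downloads"
        BY _time
```

Grouping by _time assumes one array per timestamp; in practice any per-event key works. If the rows were valid JSON, spath would be a cleaner starting point than rex.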
Hello All, I want to extend my free trial. Whom should I contact for help with extending the trial license? Regards, Nawaz
We are getting the below error in the "splunk_ta_dynatrace_dynatrace_timeseries_metrics.log" log after setting up a new input in the "Dynatrace Add-on for Splunk" app and setting it up to collect a timeseries metric. Has anyone seen this issue before, or had any other issues setting up the add-on app?

2020-06-26 14:08:49,784 ERROR pid=12302 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_Dynatrace/bin/splunk_ta_dynatrace/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/Splunk_TA_Dynatrace/bin/dynatrace_timeseries_metrics.py", line 76, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/Splunk_TA_Dynatrace/bin/input_module_dynatrace_timeseries_metrics.py", line 179, in collect_events
    send_data()
  File "/opt/splunk/etc/apps/Splunk_TA_Dynatrace/bin/input_module_dynatrace_timeseries_metrics.py", line 130, in send_data
    data = response.json()
  File "/opt/splunk/etc/apps/Splunk_TA_Dynatrace/bin/splunk_ta_dynatrace/aob_py3/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 6 column 1 (char 5)
Has anyone had luck automating the management of the conf files for the deployment app using Azure DevOps?
Is there a way to get detailed logs from the Website Monitoring app that will tell me what IP it was routed to when running its tests?   @LukeMurphey??
Hi, I am contacting you on behalf of the Verizon Media team. We need to do some testing for some of our customers who are using Splunk Cloud, and I was wondering if we could switch the trial account to a non-billable partnership account. For our testing we use almost zero logs, so there won't be any load on your system. It would be great if you could direct me to the right team to get this sorted out. Thank you.
I have encountered a problem where I cannot get the Splunk service to start after changing the $SPLUNK_DB variable in /opt/splunk/etc/splunk-launch.conf.

What I’ve tried, and further background information:

I have verified that the following steps work successfully if the $SPLUNK_DB variable is NOT set (in other words, when it defaults to $SPLUNK_HOME/var/lib/splunk):

systemctl stop Splunkd.service
systemctl start Splunkd.service

But once I edit the $SPLUNK_DB variable, I cannot get Splunk to start. Likewise, Splunk will not start after a reboot if $SPLUNK_DB is set; it will start after a reboot if the variable is not set.

The $SPLUNK_DB variable is set to /mnt/splunk, a CIFS share that I have verified is mounted and can be accessed by the system. (For the curious, this is a testing environment for me to learn Splunk. Splunk is installed on a small NUC with a decent processor and RAM, but there’s a single consumer SSD drive with limited space. The CIFS share is on a NAS with multiple terabytes of extra space. I know performance won’t be great, but then, neither will the flow of data.)

Next I tried switching to the splunk user (because that seems to be the user that owns the files in the /opt/splunk directory) to see if the issue was a permissions problem. I used sudo su - splunk. I verified that I can create, write, read, and delete files in /mnt/splunk as the splunk user, the root user, and my personal user on Linux. Conclusion: it doesn’t seem to be a permissions problem.

Curiously, when I changed the conf file while Splunk was running, Splunk created a series of directories and subdirectories inside /mnt/splunk. I can see top-level directories of audit, authDb, and hashDb. (There’s no data in them, as I don’t have Splunk set up to receive any data yet.)

I tried the following search of all the log files, hoping I would find clues about why this database path was causing me trouble:

/opt/splunk/var/log/splunk# cat *.log | grep 'mnt/splunk'

It found nothing. (But if I search instead for the default db path, 'var/lib/splunk', I find dozens or hundreds of entries, so the search works.)

I’m at a loss. Are there other steps I should take beyond changing the path to $SPLUNK_DB? Is there anything I can do to understand why Splunk isn’t starting?
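For reference, the change being described is a single line in splunk-launch.conf (the path below comes from the post itself):

```ini
# /opt/splunk/etc/splunk-launch.conf
SPLUNK_DB=/mnt/splunk
```

Two generic things worth checking in a situation like this are /opt/splunk/var/log/splunk/splunkd.log (splunkd's own startup messages) and whether the CIFS mount is available before the Splunkd.service unit starts, e.g. via a systemd mount dependency; both are offered as general suggestions, not a confirmed diagnosis.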
Considering the following two messages:

sourcetype="PCF:log" cf_app_name=app1 msg="launch processing started" UserID: ABC
sourcetype="PCF:log" cf_app_name=app1 msg="flow complete" UserID: ABC

I want to capture the elapsed time between the earliest occurrence of "launch processing started" and the latest occurrence of "flow complete", matching on UserID (which I've regex-extracted to a field). How would I approach this?

Edit: I should mention this is timeboxed to one day. If a user launches but doesn't complete within a day, I don't care.
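One way to sketch this (assuming the extracted field is literally named UserID and msg holds the message text) is a single stats pass that keeps the earliest start and the latest completion per user:

```
sourcetype="PCF:log" cf_app_name=app1 (msg="launch processing started" OR msg="flow complete") earliest=-1d@d
| stats min(eval(if(msg=="launch processing started", _time, null()))) AS start_time
        max(eval(if(msg=="flow complete", _time, null()))) AS end_time
        BY UserID
| eval elapsed_seconds = end_time - start_time
| where isnotnull(elapsed_seconds)
```

The earliest=-1d@d bound reflects the one-day timebox mentioned in the edit; users who launched but never completed drop out in the final where clause.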
Hi Splunkers, I have a testing project in progress to create multiple security dashboards from Microsoft Windows endpoints. For this one, I need to create a dashboard to display the threats detected on each device. My issue is that I have no control over the McAfee server; I only have the following McAfee log files (%ProgramData%\McAfee\Endpoint Security\Logs):

EndpointSecurityPlatform_Activity.log
SelfProtection_Activity.log
AccessProtection_Activity.log
ThreatPrevention_Activity.log
ExploitPrevention_Activity.log
OnDemandScan_Activity.log

I added the files to the Splunk database, but as I have never been infected and I really don't know how the logs work, I can't create my dashboard... Do you have clues about how to detect threats/malware/viruses within the above files so that I can build the dashboard? Thanks, Splunk experts are really rare. Kevin