Hi, We are in the process of migrating our indexes/alerts/reports/dashboards from us-east1 to ca-central1 and I would like to know if there's a way to port all the alerts/reports/dashboards without redoing them manually. Thank you,
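For context, here is roughly how I have been taking stock of the saved searches (alerts and reports) on the old stack before moving them; the app name below is just a placeholder, not our real one:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.app="my_app"
| table title eai:acl.app eai:acl.owner cron_schedule
The same pattern against /servicesNS/-/-/data/ui/views lists the dashboards.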
We have an issue where Splunk is not showing vulnerabilities in the fixed state, but Tenable Cloud has the correct information. So we're likely losing data between Tenable Cloud and Splunk, at least for updates to the fixed status of vulnerabilities, but I have no clue why. Logs look clean and we're getting data. The issue can't be on Tenable's side, since they have the correct data and we collect it straight off the API. Any ideas?
I am creating a home test lab with Splunk and I was trying to install the Splunk Universal Forwarder on one of my machines, but the error below keeps popping up saying "Certificate file path is not provided". My previous installations did not ask for a certificate file path. Do I need to acquire a certificate in order to install the forwarders?
As a Splunk admin I am able to add a JS file to a customer's app via "Edit Properties" of the app and then "Upload asset"; however, the application owners don't have permission, although they can create other knowledge objects (lookups, alerts, etc.). Which capability turns the "Upload asset" feature on or off, or is there a setting we've enabled that blocks it? We may have disabled it or be blocking it for security purposes, but we would like to turn it on in our development environment for customers doing content development. Thanks!
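For context, this is how I have been comparing my admin role against the app owners' role to look for a missing capability; the role name below is a placeholder:
| rest /services/authorization/roles splunk_server=local
| search title="app_owner_role"
| table title capabilities imported_capabilities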
Hello, I have this table with a null space and want to rename it to "No releases", but the rename and fillnull functions did not work. Is there any way to do that?
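For illustration, this is the kind of thing I have been trying; "release" is a placeholder for my actual field name, and the second line is a variant meant to also catch an empty string rather than a true null:
| fillnull value="No releases" release
| eval release = if(isnull(release) OR release == "", "No releases", release)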
Hello all, When we try to create a Cisco AMP4ep input, it is not allowing us to create one. The save button isn't working, see attached. I tried to create the input, but it is not working either. See the attachment.
Splunk version: 9.0.4.1
Cisco AMP for Endpoints input version: 3.0.0
Current input (created manually):
[amp4e_events_input]
api_host = api.amp.cisco.com
api_id = API pin
disabled = 0
eai_app_name = search
eai_user_name = admin
rcvbuf = 1572864
[amp4e_events_input://SPLUNK]
api_host = api.amp.cisco.com
api_id = api pin
index = my_index
source = amp4e_events_input://cisco_amp
sourcetype = cisco:amp:event
stream_name = Splunk_amp4ep
Can anyone help with the correct input? Regards, Nav
Hi Team, could you please guide me on how I can fetch the below keywords from raw logs like these?
2023-06-29 09:41:53.884 [INFO ] [pool-2-thread-1] ArchivalProcessor - finished reading file /absin/TRIM.ARCH.D062923.T052525
2023-06-28 10:36:24.064 [INFO ] [pool-2-thread-1] ArchivalProcessor - finished reading file / absin/TRIM.ARCH.D062823.T063718
2023-06-29 09:38:03.308 [INFO ] [pool-2-thread-1] ArchivalProcessor - Processing archival records for file TRIM.ARCH.D062923.T052525
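To make "fetch" concrete, something like this is what I am after — pulling out the file path and file name from each "finished reading file" line; the index and sourcetype are placeholders, and the rex assumes the path runs to the end of the line:
index=my_index sourcetype=archival_logs "ArchivalProcessor" "finished reading file"
| rex "finished reading file\s+(?<file_path>.+?)\s*$"
| rex field=file_path "(?<file_name>[^/]+)$"
| table _time file_path file_name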
Hello, We had a customer stumble across this issue recently. I tried changing the default_auto_cancel from 30 to 62, but after restarting splunkweb I could still pretty consistently cause searches to be cancelled by switching tabs. I checked Edge, Chrome, and Firefox; on Firefox the issue did not occur. My question is: is there another attribute I could try changing to fix this in Chrome and Edge? If there isn't an attribute, then what browser setting would need to be changed to correct this? Thanks.
https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/Knownissues
2021-12-21 SPL-216787 Searches are cancelled or time out when the user leaves the browser window or switches tabs. Workaround: In Splunk Enterprise 8.1.7, 8.2.4, and higher, change the job_default_auto_cancel setting in $SPLUNK_HOME/etc/system/local/web.conf from the default value of 30 to 62.
So I'm ingesting advanced hunting logs into Splunk, and one of the interesting fields is properties.InitiatingProcessSHA1, which is a hash of whatever file (properties.InitiatingProcessVersionInfoOriginalFileName) is being run on the end user's machine. I want to be able to extract the SHA1 value and FileName value into a KV Store and then make queries against that KV Store. For example, say there is a SHA1 hash for a file called test_program.exe, which has appeared in the logs:
SHA1: 111111111111
FileName: test_program.exe
I'd like to be able to extract the hash value as well as the file name into a KV Store, and make queries against the KV Store. That way, in the event that someone clicks on a phishing email and accidentally downloads a program called text_program.exe with a hash value of 22222222222, I can investigate it. I'm just wondering what the best way to tackle this would be.
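For illustration, this is roughly the shape I have in mind; process_hash_tracker is a placeholder for a KV-store-backed lookup that would be defined in collections.conf and transforms.conf, and the index/sourcetype are placeholders too:
index=my_index sourcetype=my_advanced_hunting
| rename "properties.InitiatingProcessSHA1" as sha1, "properties.InitiatingProcessVersionInfoOriginalFileName" as file_name
| stats latest(_time) as last_seen by sha1 file_name
| outputlookup append=true process_hash_tracker
and then query it with something like:
| inputlookup process_hash_tracker | search file_name="test_program.exe"
so a familiar file name turning up with a new hash stands out.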
Dear community, Yesterday syslog-ng on Ubuntu suddenly stopped sending logs toward Splunk. I have restarted the syslog-ng service, Splunk, and the splunkforwarder service, but still nothing. Any ideas for troubleshooting? Thank you
I am trying to upgrade 30 Universal Forwarders from 7.1.2 to 9.0.2 on Linux. I've read several responses here, including this one: https://community.splunk.com/t5/Installation/Scripted-install-thrown-by-upgrade-prompt/m-p/46594 but it is not working for me. Here is what I have tried:
/opt/splunkforwarder/bin/splunk start --accept-license --answer-y
/opt/splunkforwarder/bin/splunk start --accept-license --answer-y --no-prompt
Installing manually (from the command line and answering the prompt) works, but I really want to script this. But first I have to figure out how to avoid getting the upgrade prompt. Are there any other ideas to help me do this?
I have created a Linux uptime/reboot alert: if the server gets rebooted, it triggers when uptime <= 600 secs. Now, I have one server where the uptime value is 0 secs, then after 5 minutes the uptime changes to 300, and after another 5 minutes it changes to 600 secs. It was supposed to throw an alert but it didn't; the alert only took the first uptime value of the server, which is 0 secs. The data comes directly from the server; there is no script in line.
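For reference, a simplified sketch of the check I am trying to express; the index, sourcetype and field names are approximations of my setup, and min() is just meant to show that any low reading inside the window should count, not only the first one:
index=os_linux sourcetype=uptime earliest=-10m
| stats min(uptime) as min_uptime by host
| where min_uptime <= 600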
Upgraded several independent instances of Splunk Enterprise from various starting points, all to 9.1.0.1. Some clustered, some standalone.
8.1 -> 9.1.0.1
9.0.1 -> 9.1.0.1
All had the same outcome: when browsing to Settings > Users and Authentication > Users, most (but not all) users are no longer visible in the Users list, yet the users still have access, as validated by Splunk logs. In the most severe case there were 100+ users, mostly SAML, some local. Post-upgrade there are 4 showing, yet in validation all can still log in.
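For reference, this is how I have been cross-checking what the backend still reports versus what the Users page shows:
| rest /services/authentication/users splunk_server=local
| table title roles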
Hi, In one of my indexes, data was populating and all fields were showing until I uploaded a CSV file to that index. After that, my actual data is not populating. Can someone help us figure out how to get the original data back?
I am trying to use a radial gauge to show a percentage using avg(cpu_metric.Idle). However, I want the "reverse" value of cpu_metric.Idle. So what I am attempting is:
| mstats .................. hostname.......... | chart count(eval(100 - avg(cpu_metric.Idle))) as name
Basically I am trying to show 100 - avg(cpu_metric.Idle) on a gauge, and the only way I can get the new value is
| chart avg(cpu_metric.Idle) as name | eval new = 100 - name
but I can't put the eval value onto the chart.
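To make the intent concrete, this is the shape I am aiming for; the index name is a placeholder and I have dropped the rest of my WHERE clause:
| mstats avg(cpu_metric.Idle) as idle WHERE index=my_metrics_index
| eval busy = 100 - idle
| gauge busy 0 50 75 100
i.e. compute 100 - avg(cpu_metric.Idle) first with eval, then hand that single value to the gauge.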
I need to monitor the highlighted log file (which has no extension) in the folder shown. Adminportal_* and 584 will vary for other builds in Jenkins. I have defined the path in different formats in the inputs.conf file on the deployment server, but it's not getting indexed. Here is the content of inputs.conf from the respective deployment app:
# scan Jenkins Build logs
[monitor://D:\Jenkins_Home\...\*]   <-- what exact path needs to be defined here?
disabled = false
recursive = true
#time_before_close = 5
#ignoreOlderThan = 24h
index = jenkins_logs
sourcetype = jenkins:javalog
Thank you!
Hello, I have an application which sends a specific kind of log. Every log has a jobId field plus additional information, "returned: 1" or "returned: 0". For one jobId the program can return a lot of "returned: 1" logs and only one "returned: 0" log. I want a dashboard with a daily count of jobIds, but I want to exclude a jobId when one of its logs contains "returned: 1". I wrote something like this:
$env$ $project$ "jobId" AND NOT "returned: 0" | timechart span=24h dc(jobId)
but this only excludes the individual "returned: 0" logs for a jobId, not the whole jobId. Is there a way to get this dashboard?
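To show the structure I am reaching for — dropping the whole jobId rather than just its individual events — something like this; swap the pattern inside searchmatch() depending on which result should disqualify a job:
$env$ $project$ "jobId"
| eventstats sum(eval(if(searchmatch("returned: 0"), 1, 0))) as excluded_hits by jobId
| where excluded_hits == 0
| timechart span=1d dc(jobId)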
Q1) I have my search starting with earliest=-1mon latest=now(). I want to get the dates as startdate = earliest and enddate = latest, and calculate the number of days between them.
Q2) Also, when we use the time picker, how can I get the start and end times as fields in the search when we use an option like "Last 7 days"?
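For Q1, something along these lines is the sort of output I am after; the index is a placeholder, and I assume the same fields would also answer Q2, since addinfo reports the search's time range:
index=my_index earliest=-1mon latest=now()
| addinfo
| head 1
| eval startdate = strftime(info_min_time, "%Y-%m-%d"), enddate = strftime(info_max_time, "%Y-%m-%d"), days = round((info_max_time - info_min_time) / 86400, 0)
| table startdate enddate days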
Hi all, I have a search running with the following results:
date_year count
2022 44,814
How do I get the average count over the year? I've tried to eval date_year divided by 12, but that doesn't look right. I also have | timechart avg(date_year), and that is not working out. Any ideas?
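To show what I mean, this is the sort of thing I have been trying to express (index is a placeholder); dividing by 12 assumes a full year of data:
index=my_index
| stats count by date_year
| eval monthly_avg = round(count / 12, 2)
Averaging per-month buckets (| timechart span=1mon count | stats avg(count) as monthly_avg) would be the other reading, with the caveat that empty months inside the search range count as zero.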
Hello, When I use the VT4Splunk application in a search, I always get the following error: Unexpected error when enriching IoC: '_last_correlation_date'. This doesn't prevent me from getting a result, but if anyone knows how to get rid of this error, I'd love to hear from you. Thanks. Gramy