All Topics


Hello, I have logs at the following paths:

/abc-logs/hosta/mods/stdout.240513-070854
/abc-logs/hostb/mods/stdout.240513-070854
/abc-logs/hostc/mods/stdout.240513-070854
/abc-logs/hostd.a.clusters.abc.com/mods/stdout.240206-084344
/abc-logs/hoste/mods/stdout.240513-070854

When I monitor this path to get the logs into Splunk, only two of the files are ingested. Checking the internal logs, I see the following errors:

05-16-2024 10:07:25.609 -0700 ERROR TailReader [1846912 tailreader0] - File will not be read, is too small to match seekptr checksum (file=/abc-logs/hosta/mods/stdout.240513-070854). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

A possible timestamp match (Fri Feb 13 15:31:30 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: FileClassifier C:\abc-logs\hostd.a.clusters.abc.com\mods\stdout.240206-084344

I am using the props below:

[mods]
BREAK_ONLY_BEFORE_DATE = null
CHARSET = AUTO
CHECK_METHOD = entire_md5
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 365
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
crcSalt = <SOURCE>
initCrcLength = 1048576

I tried changing CHECK_METHOD to the other options, but that did not work. Thanks in advance.
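One thing worth noting when reading the config above: crcSalt and initCrcLength are inputs.conf settings that belong on the monitor stanza, not props.conf settings. A minimal sketch of the corresponding inputs.conf, where the wildcarded path and sourcetype assignment are assumptions:

[monitor:///abc-logs/*/mods/stdout.*]
sourcetype = mods
crcSalt = <SOURCE>
initCrcLength = 1048576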
Hello all, just wondering if anyone else has removed the index-time extractions for the Cisco DNA Center Add-on (6668). I don't like that it needlessly indexes fields and then resolves the duplicate-field issue by disabling KV_MODE. I was thinking of adding something like this to the app's props.conf, but I am still looking for better options:

INDEXED_EXTRACTIONS =
KV_MODE = JSON
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n\[\]\,]+\s*)([\{])
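For what it's worth, one way to verify whether a field is still index-time extracted after such a change (sketch; the index, field name, and value below are placeholders, not the add-on's actual schema): search with the indexed-field :: syntax, which only matches indexed fields, e.g.

index=your_dnac_index eventType::NETWORK_EVENT

If the :: search stops matching while the plain field=value search still works via KV_MODE, the indexed extraction is gone.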
So I have the following setup and everything is good, but I want to do something like a subsearch.

Sample event:

User-ABCDEF assigned Role-'READ' on Project-1234 to GHIJKL

Current SPL:

index="xxxx" "role-'WRITE'" OR "role-'READ'"
| rex "User-(?<userid>[^,]*)"
| rex "(?<resource>\w+)$"
| eval userid=upper(userid)
| stats c as Count latest(_time) as _time by userid

I get output like this:

ABCDEF ASSIGNED ROLE-'READ' ON PROJECT-1234 TO GHIJKL

What I want is to search on just the GHIJKL after it is extracted, or should I just put it at the front so it only fetches that?
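For illustration, a sketch of filtering on the extracted trailing token (this assumes the final word of the event is always the grant target, as in the sample; the tightened userid pattern [^,\s]+ is also an assumption, to stop the capture at the first whitespace):

index="xxxx" "role-'WRITE'" OR "role-'READ'"
| rex "User-(?<userid>[^,\s]+)"
| rex "(?<resource>\w+)$"
| search resource="GHIJKL"
| eval userid=upper(userid)
| stats count as Count latest(_time) as _time by userid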
Please, what is the REST endpoint for the searches that users are running?
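For reference, a sketch of one endpoint commonly used for this, run from the search bar (the field list is illustrative):

| rest /services/search/jobs
| table author, title, dispatchState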
I want a query that shows the total volume of the indexes used for Splunk searches, i.e. a query on information about how much the indexes are used, based on Splunk searches.
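If "volume" here means indexed data volume per index, one common sketch reads the license usage log in _internal (interpreting the question this way is an assumption):

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024,2)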
Can I get a query that will find the searches that users are running in Splunk?
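For reference, a sketch against the audit index (standard _audit fields; adjust the time range and filters as needed):

index=_audit action=search info=granted search=*
| table _time, user, search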
I have a search that returns the following table (after transpose):

column        row 1     row 2
search_name   UC-315    UC-231
ID            7zAt/7    5Dfxdf
Time          13:27:17  09:17:09

And I need it to look like this:

column        new_row
search_name   UC-315
ID            7zAt/7
Time          13:27:17
search_name   UC-231
ID            5Dfxdf
Time          09:17:09

This should work independently of the number of rows. I've tried using mvexpand and streamstats, but without any luck.
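For what it's worth, a sketch of one approach that skips transpose entirely (this assumes the pre-transpose results have one row per search with fields search_name, ID, and Time; <base search> is a placeholder):

<base search>
| table search_name ID Time
| streamstats count as row
| untable row column new_row
| fields column new_row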
Hello, in my case I have a list of products with a product type and a weight. For products of the same type, the weight might differ, although always within some range. As an example:

productid  type  weight  anomaly?
1          a     100kg
2          a     102kg
3          b     500kg
4          b     550kg
6          a     15kg    yes
7          b     2500kg  yes

One option would be solving this by calculating the average and standard deviation:

index=products
| stats list("productweight") as weights by "producttype"
| mvexpand weights
| eval weight=tonumber(weights)
| eventstats avg(weight) as avg stdev(weight) as stdev by "producttype"
| eval lowerBound=(avg-stdev*10), upperBound=(avg+stdev*10)
| where weight < lowerBound OR weight > upperBound

But I was wondering whether there is a way to solve this with the anomalydetection function. The function should search for anomalies within the products of the same producttype, not generally across all available weights. Something like:

| anomalydetection by "producttype"

but this option doesn't seem to be available. Does somebody know how to do this? Many thanks in advance for your help.
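For comparison, a leaner sketch of the statistical route that works directly on the events, without the list/mvexpand step (it assumes each event carries producttype and a productweight whose kg suffix may need stripping, as below):

index=products
| eval weight=tonumber(replace(productweight, "kg", ""))
| eventstats avg(weight) as avg stdev(weight) as stdev by producttype
| where weight < avg - stdev*10 OR weight > avg + stdev*10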
Hello, I'm trying to build a new chart calculated from packet counts. I search with a query over the interfaces of several devices, and I can show the following:

_time  interface-A  Interface-B  interface-C
9:00   100          200          100
9:10   150          250          100
9:20   200          300          100

I would like to add an "Interface A+B-C" column, as follows:

_time  interface-A  Interface-B  interface-C  Interface A+B-C
9:00   100          200          100          200
9:10   150          250          100          300
9:20   200          300          100          400

How can I make it?
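For reference, a sketch of adding the computed column with eval (this assumes the field names match the chart's column headers exactly, including case; the single quotes are needed because the names contain dashes):

... | eval "Interface A+B-C" = 'interface-A' + 'Interface-B' - 'interface-C'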
We've run into a few occasions where one of our network devices stops sending logs to Splunk. I have a tstats search based on the blog post here: https://www.splunk.com/en_us/blog/tips-and-tricks/how-to-determine-when-a-host-stops-sending-logs-to-splunk-expeditiously.html

Here is the search expression I'm using:

| tstats latest(_time) as latest where index=index_name earliest=-1d by host
| eval recent = if(latest > relative_time(now(),"-15m"),1,0), realLatest = strftime(latest,"%c")
| where recent=0

My tstats search does return the hosts that have not sent any logs, but it never triggers when I use this search in an alert. I noticed that the search only shows the hosts in the Statistics view and there are no Events. Is this why my alert is not triggering? I've found several other examples on this forum of people using tstats to detect when a host stops sending logs. Is there something special they are configuring in their alert to trigger off of the statistics results?
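For reference, a sketch of alert settings that fire on result rows (statistics rows count as results for the "number of events" trigger), in savedsearches.conf form; the stanza name and schedule are assumptions:

[Host stopped sending logs]
enableSched = 1
cron_schedule = */15 * * * *
counttype = number of events
relation = greater than
quantity = 0
search = | tstats latest(_time) as latest where index=index_name earliest=-1d by host | eval recent = if(latest > relative_time(now(),"-15m"),1,0) | where recent=0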
Hello, I'm using TrackMe Free Edition 2.0.92 on my test environment (a single instance with 2 UFs on Debian 11). I'm able to create vtenants, but I do not see any of them on the Vtenant page. Yet, they are listed in the configuration tab. I cannot access the pop-up to manage any of the tenant specs. This behaviour was already present with the previous version of TrackMe v2. I checked the logs and the TrackMe logs, restarted the instance, updated the app, checked the browser logs (HAR files), removed and reinstalled the app, tried to remove the banner, deactivated and reactivated the library restrictions, and checked limits: all without success. I don't have any more ideas. My prod environment is distributed and does not have the issue. I'm sure I did something wrong somewhere, but I cannot pinpoint where. Could you please suggest some leads? Thanks, Ema
https://splunkbase.splunk.com/app/4564 Hi all, I want to know the status of this particular app's usage. As we are seeing the app being deprecated, is there any alternate app/add-on that provides the same functionality? The current app stopped working. Regards, Teja
Hi everyone, I'm trying to forward Sysmon event logs from a Windows Server to Splunk with a Universal Forwarder installed on the Windows machines. I've successfully forwarded Security event logs with the same forwarder, so I'm confident there are no network connectivity issues. Sysmon events are created as expected and exist in the Event Viewer. In my setup, I'm sending Sysmon events from my Windows clients to a WEF server, which collects all the logs. This part works fine. My Splunk deployment is a single server on Rocky Linux. I installed the Splunk UF with a network user account, so it should have access to any event log. When I try to add a new "Windows Event Logs" input, I only have the following event channels to choose from:

Application
ForwardedEvents
Security
Setup
System

I've tried adding the input manually to the app in the file located at /opt/splunk/etc/deployment-apps/_server_app_WindowsServers/local/inputs.conf. Security logs are sent, but Sysmon logs are not. Here's the content of the file:

[WinEventLog://Security]
index = win_servers

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
index = win_servers
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
renderXml = true

I've tried various options following some tutorials, but nothing worked. I also tried copying the content of this file to $SPLUNK_HOME\etc\apps\_server_app_WindowsServers on the Windows server with the UF, but the results are the same. Any insights into this issue would be greatly appreciated. I'm sure I'm missing something here. Thank you in advance, Yossi
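For what it's worth, a sketch of confirming the exact channel name the UF should subscribe to (standard Windows command, run on the host where the UF is installed):

wevtutil el | findstr /i sysmon

As a general WEF behaviour (an assumption about this setup, not something stated in the post): events collected by a WEF server typically land in its ForwardedEvents channel rather than in a local Sysmon channel, so the channel to monitor depends on which host the UF runs on.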
Hi, I have set up a deployment server. When I check splunkd_access.log, it shows successful phonehome connections from the Heavy Forwarder. I can also see apps getting deployed to the deployment clients. But when I run ./splunk list deploy-clients, it shows "No deployment clients have contacted this server". What is going wrong here? Please, can anyone help me? Regards, PNV
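For reference, a sketch of another way to list the clients the deployment server knows about (REST, run from the search bar on the deployment server; the displayed fields are illustrative):

| rest /services/deployment/server/clients splunk_server=local
| table hostname, ip, lastPhoneHomeTime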
SPL query:

| getservice
| search algorithms=*itsi_predict_*

I want to extract the algorithms and then outputlookup the model_id of the model where recommended:True. Please suggest how I can do this.
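A very rough sketch of the general shape, assuming the algorithms field holds JSON containing model_id and recommended keys (the field paths and lookup name are guesses, not the actual ITSI schema):

| getservice
| search algorithms=*itsi_predict_*
| spath input=algorithms
| search recommended=True
| table model_id
| outputlookup recommended_models.csv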
Hello all, with the introduction of the new ConfigurationTracker in Splunk 9.0, we noticed that some of our apps are not being logged. The system is a Linux single Splunk Enterprise server, and the apps which are not being logged are not directly located under /opt/splunk/etc/apps. Instead, we only have symbolic links to another folder on the system. This works for everything else, but the configuration tracker seems to ignore symbolic links. It is also not a permission issue with the linked folder: the linked folder has the same splunk group and permissions assigned.

/opt/splunk/etc/apps/symboliclinkapp -> /anotherfolder/symboliclinkapp

Is there an option to change the configuration tracker to also consider symbolic links?
Currently, this is my SPL query and it just displays different results.

This is my hostname_list.csv:

host
hostname_a*
hostname_b*
hostname_c*

| inputlookup hostname_list.csv
| fields host
| join type=inner host
    [search index=unix
    | stats latest(_time) as latest_time, latest(source) as source, latest(_raw) as event by host
    | convert ctime(latest_time) as latest_time]
| table host, latest_time, source, event

and it displays like this:

host         latest_time  source  event
hostname_a*
hostname_b*
hostname_c*

I assume that the wildcard "*" is acting like a literal string. I'm expecting results like this:

host           latest_time  source  event
hostname_a12   test         test    test
hostname_a23   test         test    test
hostname_c123  test         test    test

Please help, thanks!
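For reference, a sketch of one approach where the wildcards are honoured: feed the lookup rows into the base search as search terms, where * is treated as a search-time wildcard (this assumes the host column is meant to match Splunk's host field):

index=unix [| inputlookup hostname_list.csv | fields host]
| stats latest(_time) as latest_time, latest(source) as source, latest(_raw) as event by host
| convert ctime(latest_time) as latest_time
| table host, latest_time, source, event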
The Incident Review dashboard is displaying no values, despite correlation searches being enabled. Upon investigation, I noticed that the notable index has 0 bytes. Could someone kindly guide me on how to troubleshoot this issue? Thanks!
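For reference, one basic check (sketch; the action.notable field assumes Enterprise Security's notable alert action and may be named differently in your version): confirm the correlation searches actually have the notable action enabled and are not disabled:

| rest /services/saved/searches splunk_server=local
| search action.notable=1
| table title, disabled, action.notable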
Hello, I am currently correlating an index with a CSV file using lookup. I am planning to move the CSV file to a database and will replace lookup with dbxlookup. Below is my search query using lookup:

index=student_grade
| lookup student_info.csv No AS No OUTPUTNEW Name Address

Below is my "future" search query using dbxlookup. Is it going to be this simple? Please share your experience. Thank you so much.

index=student_grade
| dbxlookup connection="studentDB" query="SELECT * FROM student_info" No AS No OUTPUT Name, Address

index=student_grade:

No  Class    Grade
10  math     A
10  english  B
20  math     B
20  english  C

student_info.csv:

No  Name       Address
10  student10  Address10
20  student20  Address20

Expected result:

No  Class    Grade  Name       Address
10  math     A      student10  Address10
10  english  B      student10  Address10
20  math     B      student20  Address20
20  english  C      student20  Address20
The PID hits an error and does not recover, and the add-on cannot start inputs because the previous ones are reported as still running. The problem is that when the process running the data input hits an error, it is not handling it or recovering. The add-on can't start the input again, because the input reports that the PID is already running. The add-on has also been upgraded to Qualys Technology Add-on (TA) for Splunk | Splunkbase, version 1.11.4. The data inputs having issues are: host detection, policy_posture_info. Please suggest how to fix the issue.

TA-QualysCloudPlatform: 2024-05-15 01:10:52 PID=1889491 [MainThread] ERROR: Another instance of policy_posture_info is already running with PID 2*****. I am exiting.
TA-QualysCloudPlatform: 2024-05-15 01:10:52 PID=1889491 [MainThread] INFO: Earlier Running PID: 2*****
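For reference, a generic way to confirm whether the PID reported in the log is actually alive on the host (standard Linux; substitute the real, unmasked PID from your log for the placeholder):

ps -p <PID> -o pid,etime,cmd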