All Posts

Hi, I want to know how to change an add-on from archived to active status on the Splunkbase site. I have already submitted a new version of the add-on file, but the add-on is still in archived status.
One minor request: if this logging is ever enhanced, can it please include the output group name?

05-16-2024 03:18:05.992 +0000 WARN AutoLoadBalancedConnectionStrategy [85268 TcpOutEloop] - Current dest host connection <ip address>:9997, oneTimeClient=0, _events.size()=56156, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Thu May 16 03:18:03 2024 is using 31477941 bytes. Total tcpout queue size is 31457280. Warningcount=1001

This is helpful; however, the destination IP happens to be Istio (a Kubernetes software load balancer), and I have 3 indexer clusters with different DNS names on the same IP/port (the incoming DNS name determines which backend gets used). So my only way to "guess" the outputs.conf stanza involved is to set a unique queue size for each one, so I can determine which indexer cluster / output stanza has the high warning count. If the warning included tcpout=<stanzaname> or similar, that would be very helpful for me. Thanks
As always, the answer depends a lot on data characteristics and the real problem you are trying to solve. Maybe you can explain why the second look, which is highly unconventional, is more desirable? Is it safe to say that search_name, ID, and Time are a triplet that should be treated as a unit? In that case, wouldn't this form be more human-friendly?

Time      search_name  ID
13:27:17  UC-315       7zAt/7
13:27:17  UC-231       5Dfxdf

(This, of course, is the default time-series, aka Splunky, presentation.)
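If that triplet view is the goal, something like this would render it (a minimal sketch, assuming the search_name and ID fields already exist in the events):

| eval Time=strftime(_time, "%H:%M:%S")
| table Time search_name ID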
To search on the resource field, use the where command.

index="xxxx" "role-'WRITE'" OR "role-'READ'"
| rex "User-(?<userid>[^,]*)"
| rex "(?<resource>\w+)$"
| where resource="GHIJKL"
| eval userid=upper(userid)
| stats c as Count latest(_time) as _time by userid
Per the outputs.conf.spec file:

# These settings are only applicable under the global [tcpout] stanza.
# This filter does not work if it is created under any other stanza.
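In other words, the forwardedindex filters would have to live under [tcpout] itself. A minimal sketch of what that looks like; note that filters in the global stanza apply to every output group, so this alone cannot filter for just one group:

[tcpout]
defaultGroup = dag,dag-n
# honored only here, in the global stanza
forwardedindex.0.whitelist = firewall
forwardedindex.1.blacklist = .*

[tcpout:dag-n]
disabled = false
server = pn:Z
# forwardedindex.* settings placed here are ignored

For true per-group routing, a different mechanism such as _TCP_ROUTING (set per input, or via props/transforms on a heavy forwarder) is usually needed.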
Hi, did you check sslVersions in authentication.conf and server.conf? Check that the SSL version is consistent among cluster members. Regards.
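One quick way to compare the effective values on each member is btool (a sketch, assuming a standard $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool server list sslConfig --debug | grep -i sslVersions
$SPLUNK_HOME/bin/splunk btool authentication list --debug | grep -i sslVersions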
Hi all, I have a number of forwarders that send a lot of logs to different indexes. For example, there are three indexes: Windows, Linux, and Firewall. A new cluster has been set up and I am planning to forward only some logs to the new cluster based on the index name. For example, consider that of the three indexes Windows, Linux, and Firewall, I'm going to send only Firewall to the new cluster. This is the configuration that I tried to create:

[tcpout]
defaultGroup = dag,dag-n

[tcpout:dag]
disabled = false
server = p0:X,p1:Y

[tcpout:dag-n]
disabled = false
server = pn:Z
forwardedindex.0.whitelist = firewall
forwardedindex.1.blacklist = .*

But unfortunately, some logs from both the Windows and Linux indexes are still sent to the new cluster, and because no index is defined for them in the new cluster, they frequently cause errors. The thing that came to my mind was that maybe I should empty the default whitelist and blacklist first. Anyone have any ideas?
Currently the Cisco Networks app is looking in all indexes when searching for the cisco:ios sourcetype. I'm looking for an easy way to restrict this to a single index to help improve performance. There are no config options in the app or add-on that I can see. Any thoughts?
I don't know much about roles/permissions, but I have created dozens of other alerts. The only difference I can tell is that all my other alerts use searches that return events when triggered. This search only returns results in the statistics section.
Hi, if your DS is 9.2.x, have you read https://community.splunk.com/t5/Deployment-Architecture/The-Client-forwarder-management-not-showing-the-clients/m-p/677225#M27893 ? r. Ismo
Hi, this should work. Are you sure that you have been granted a role which allows you to run alerts? r. Ismo
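One way to check which roles carry the scheduling capability (a sketch using the REST search command; it requires permission to query the endpoint):

| rest /services/authorization/roles splunk_server=local
| search capabilities=schedule_search
| table title capabilities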
Hi, there is an order which defines how fields are extracted, aliased, etc. You can see it e.g. here: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Knowledge/Searchtimeoperationssequence. Based on that, you can see that in the extraction phase you cannot use aliases, as those are applied only after all extractions are done. r. Ismo
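For example (a hypothetical props.conf sketch; the sourcetype and field names are made up):

[my_sourcetype]
# extractions run first, against _raw
EXTRACT-user = User=(?<user>\w+)
# aliases are applied only after all extractions have run
FIELDALIAS-account = user AS account
# so an extraction cannot reference the alias:
# EXTRACT-bad = (?<id>\d+) in account   <- 'account' does not exist yet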
Hi, there are some apps which you can use to continuously generate sample data. Probably the most used is Eventgen: https://splunkbase.splunk.com/app/1924. One option is just to e.g. tar that index and untar it on the target systems. Of course, you also need to create an app that defines the index on those target systems. This requires that those nodes are sufficiently similar, e.g. the same Linux etc. If I recall right, something like this was done for those older BOTS datasets? r. Ismo
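Roughly like this (a sketch, assuming the default index path, splunkd stopped on both sides, and 'myindex' as a placeholder name):

# on the source system
tar -czf myindex.tar.gz -C $SPLUNK_HOME/var/lib/splunk myindex
# on each target, after the index has been defined there
tar -xzf myindex.tar.gz -C $SPLUNK_HOME/var/lib/splunk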
Hi, this is quite an often-asked question. You can find answers from the community via Google. But in short: you can't find that information in Splunk's audit logs. Here are a couple of links that explain the reason.

https://community.splunk.com/t5/Splunk-Search/Data-used-by-searches/m-p/687785#M234581
https://community.splunk.com/t5/Splunk-Search/How-to-find-which-indexes-are-used/m-p/674510

r. Ismo
Hello, I have logs in the following paths:

/abc-logs/hosta/mods/stdout.240513-070854
/abc-logs/hostb/mods/stdout.240513-070854
/abc-logs/hostc/mods/stdout.240513-070854
/abc-logs/hostd.a.clusters.abc.com/mods/stdout.240206-084344
/abc-logs/hoste/mods/stdout.240513-070854

When I try to monitor this path to get the logs into Splunk, I only get two files. When I checked the internal logs, I see the following errors:

05-16-2024 10:07:25.609 -0700 ERROR TailReader [1846912 tailreader0] - File will not be read, is too small to match seekptr checksum (file=/abc-logs/hosta/mods/stdout.240513-070854).  Last time we saw this initcrc, filename was different.  You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source.  Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

A possible timestamp match (Fri Feb 13 15:31:30 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: FileClassifier C:\abc-logs\hostd.a.clusters.abc.com\mods\stdout.240206-084344

I am using the props below:

[mods]
BREAK_ONLY_BEFORE_DATE = null
CHARSET = AUTO
CHECK_METHOD = entire_md5
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 365
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
crcSalt = <SOURCE>
initCrcLength = 1048576

I tried changing CHECK_METHOD to other options, but it did not work. Thanks in advance.
Hi, that's almost mission impossible with a standard setup, as you can run queries without defining any indexes in them. You could also use eventtypes etc. to hide the real index names. If you start to index all your search logs from the SH side and look at the litesearch part, that could give you a more accurate index list? r. Ismo
It's still the same situation for supported languages. Supported means e.g. splunklib etc. integration support. Of course, you could use almost any language to do e.g. scripted inputs.
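For example, a scripted input on a forwarder can invoke a Perl script (a sketch; the app and script names are hypothetical):

# inputs.conf in your app
[script://$SPLUNK_HOME/etc/apps/myapp/bin/collect.pl]
interval = 60
sourcetype = my:perl:data
index = main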
But what about Splunk Cloud? Does it also support Perl?
Hello @CSReviews, you can export as a CSV file; it's then easy to import: https://hurricanelabs.com/splunk-tutorials/ingesting-a-csv-file-into-splunk/ ("Upload with Splunk Web")
Hello all, just wondering if anyone else has removed the index-time extractions for the Cisco DNA Center Add-on (6668). I don't like that it needlessly indexes fields and then resolves the duplicate-field issue by disabling KV_MODE. I was thinking of adding something like this to the app's props.conf, but I am still looking for better options:

INDEXED_EXTRACTIONS =
KV_MODE = JSON
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n\[\]\,]+\s*)([\{])