All Topics


Hi, I am using earliest and latest in a subsearch to get the last 24 hours of data and compare it with the last 7 days of data to see what changes happened. If I set the time range picker to last 7 days, which time range will my outer search consider? Kindly assist.
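For context: the outer search uses the time range picker (here, last 7 days) unless it sets earliest/latest itself, while earliest/latest inside the subsearch override the picker for the subsearch only. A minimal sketch of one way to frame the comparison, assuming a hypothetical index my_index and a host field:

index=my_index earliest=-7d@d latest=now
| stats count AS total_7d BY host
| join type=left host
    [ search index=my_index earliest=-24h latest=now
      | stats count AS last_24h BY host ]
| fillnull value=0 last_24h
| eval change = total_7d - last_24h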
Hello All, We are currently testing Splunk with the intention of having it collect our Security logs and other logs from domain controllers. Early on, we ran into an issue where user IDs and group GUIDs were being translated after being ingested into Splunk. A quick Google search revealed a simple switch in a configuration stanza that stops translating the account GUIDs. While it's nice that the GUIDs can be resolved, we want a one-to-one match between what is collected from the event log and what is put into Splunk.

There is a security event ID 4625 that we collect. In Splunk, there is a field called "Group Domain". Some 4625 events appear as expected (correct group, correct domain, etc.), but others show the Group Domain value as the name of the client computer that generated the security event on the domain controller. Incidentally, this same value appears in the "Source Workstation" field.

We are trying to figure out why Splunk is populating the Group Domain field with the name of the workstation generating the security event, and whether there is a way to tell Splunk not to populate this field, as it doesn't necessarily apply. If you look at the XML of the event, no such field exists. Any help, guidance, etc. would be greatly appreciated. Regards, Blake
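For reference, the configuration switch described above is most likely evt_resolve_ad_obj on the Windows event log input. A minimal inputs.conf sketch on the forwarder, with the stanza assumed to be the standard Security channel input:

[WinEventLog://Security]
disabled = 0
# 0 = do not resolve Active Directory objects, so SIDs/GUIDs stay raw
# and the indexed event matches the original event one-to-one
evt_resolve_ad_obj = 0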
Hi, I'm setting up the Splunk Universal Forwarder to watch logs generated by an application I have in AWS Elastic Beanstalk. This is done by running a shell script that installs the Universal Forwarder and sets up the monitors. Simple enough. The problem is my application logs to a rolling file, meaning after a certain amount of data has been written to the file (10MB in this example) it creates a new file in the same location named "example 1.log", then "example 2.log", etc. Currently I've tried using the command below to set up all the monitors, with no success:
/opt/splunkforwarder/bin/splunk add monitor "/var/logs/example*"
How can I capture all the files it will create?
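A minimal inputs.conf sketch of the equivalent monitor stanza, assuming the files really live under /var/logs/ and all share the example prefix; the wildcard also matches names containing spaces, such as "example 1.log":

[monitor:///var/logs/example*.log]
disabled = 0
sourcetype = my_app   # hypothetical sourcetype
# If rolled files begin with identical header bytes, the CRC check can
# mark them as already seen; salting the CRC with the path avoids that.
crcSalt = <SOURCE>

If the CLI command reports no errors but nothing is indexed, it may also be worth confirming the path is really /var/logs/ and not /var/log/ on the instance.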
Hi, hope you are well. I want to use splunk-agent-java and have read the description on this page: https://github.com/splunk/splunk-agent-java
1. This link does not work: http://splunk-base.splunk.com/apps/25505/splunk-for-jmx
2. I downloaded splunkagent.tar.gz and extracted it to /opt/splunkagent.jar on one of my servers, which already has a Splunk forwarder installed.
3. Here is my splunkagent.properties:
agent.app.name=sokantest
agent.app.instance=MyJVM
agent.userEventTags=key1=value1,key2=value2
splunk.transport.impl=com.splunk.javaagent.transport.SplunkTCPTransport
splunk.transport.tcp.host=192.168.1.1
splunk.transport.tcp.port=9997
splunk.transport.tcp.maxQueueSize=5MB
splunk.transport.tcp.dropEventsOnQueueFull=false
trace.blacklist=com/sun,sun/,java/,javax/,com/splunk/javaagent/
trace.methodEntered=true
trace.methodExited=true
trace.classLoaded=true
trace.errors=true
trace.hprof=false
trace.hprof.tempfile=mydump.hprof
trace.hprof.frequency=600
trace.jmx=false
trace.jmx.configfiles=jmx
trace.jmx.default.frequency=60
4. Should I do something on my server side? I can't find any index or sourcetype!
5. I have also read this: https://www.slideshare.net/damiendallimore/splunk-java-agent
Any ideas? @Damien_Dallimor Thanks
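One general point a sketch may clarify: a Java agent has to be attached to the target JVM at startup with the -javaagent flag; simply placing the jar on a server that runs a forwarder does not instrument anything. A hedged example, with a hypothetical application entry point:

java -javaagent:/opt/splunkagent.jar -cp myapp.jar com.example.MyApp

Also note that port 9997 is conventionally the splunktcp (forwarder-to-indexer) port; a raw TCP transport such as SplunkTCPTransport usually needs a plain tcp:// network input defined on the receiving side, and that input is where the index and sourcetype you are looking for would be assigned.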
Splunk query:
index="abc" source=def [| inputlookup ABC.csv | table text_strings count | rename text_strings as search]
Problem: I need to count the text_strings values, but when I run the above search, which matches on the text_strings, I don't find a field called search that I can count by. So I need help, @somesoni2, if you can help please.
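The rename to search makes the subsearch return its values as raw search terms, so no search field survives into the outer results; that is expected behavior. A hedged sketch of one way to get a count per string, using map to run one search per lookup row (this can be slow for large lookups; the maxsearches cap is an arbitrary illustration):

| inputlookup ABC.csv
| fields text_strings
| map maxsearches=100 search="search index=abc source=def \"$text_strings$\" | stats count | eval text_strings=\"$text_strings$\""
| table text_strings count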
Hi, I have transforms to send logs from prod hosts to one index and from non-prod hosts to another.
Transforms:
[prod]
DEST_KEY = MetaData:Index
REGEX = (.*-prd.*)
FORMAT = index_a
[nonprod]
DEST_KEY = MetaData:Index
REGEX = (.*-nprd.*)
FORMAT = index_b
The above transforms work fine for all logs from those hosts. But now the problem is I only want them to apply to /var/log/messages and /var/log/secure. Any suggestions on whether I can combine multiple regex conditions, i.e. host matching prd and a source path? I appreciate your help on this.
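One hedged way to scope this: call the transforms only from per-source props stanzas, and key the regex off the host metadata explicitly. A sketch, assuming the transforms should match on the host name:

props.conf:
[source::/var/log/messages]
TRANSFORMS-route_by_host = prod, nonprod

[source::/var/log/secure]
TRANSFORMS-route_by_host = prod, nonprod

transforms.conf:
[prod]
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Index
REGEX = -prd
FORMAT = index_a

[nonprod]
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Index
REGEX = -nprd
FORMAT = index_b

Because the props stanzas are bound to the two source paths, events from any other source never run the transforms, so their index assignment is untouched.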
I am using the following query and trying to display the results using stats, but counted by field values:
search query | table A B C D E | stats count values(A) as errors values(B) values(C) by E
Also tried:
| stats count by E A B C
(but this messes up everything, as it requires every field to have values)
Current output:
E        count    A    B     C
Value1   10       X    YY    ZZZ
                  Y    ZZ    BBB
Desired output:
E        count    A    B     C
Value1   8        X    YY    ZZZ
         2        Y    ZZ    BBB
@somesoni2
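Since the desired output is a count per (E, A, B, C) combination, the usual reason stats count by E A B C "messes up everything" is that events missing any of those fields are dropped from the results. A minimal sketch, assuming a placeholder value is acceptable for empty fields:

search query
| fillnull value="N/A" A B C
| stats count BY E A B C
| sort E -count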
Hi, how can I ingest MCAS Salesforce logs into Splunk?
Hi there, today my Controller got stuck with high CPU usage. I restarted it, and now for some weird reason no one can log in; I can only access /controller/admin.jsp. Inside the server log I get a lot of entries like this:
[#|2021-12-06T22:39:37.411-0300|INFO|glassfish 4.1|javax.enterprise.system.core.security|_ThreadID=69;_ThreadName=http-listener-1(13);_TimeMillis=1638841177411;_LevelValue=800;_MessageID=NCLS-SECURITY-05046;|Audit: Authentication refused for [singularity-agent@customer1].|#]
[#|2021-12-06T22:39:37.412-0300|INFO|glassfish 4.1|javax.enterprise.system.core.security.com.sun.enterprise.security.jmac.callback|_ThreadID=69;_ThreadName=http-listener-1(13);_TimeMillis=1638841177412;_LevelValue=800;|jmac.loginfail|#]
I even tried to create a new account, but now when I log in everything is blank. Did I break something in the controller? Is it possible to recover what I "destroyed"? Any help is appreciated.
Hey everyone, if an event is added to a case as evidence, it's simple to retrieve it while looking at the case: Sources -> Cases -> click on the case -> Evidence, then look at Associated Events. But this is only useful if the events were added as evidence. If they were not added as evidence, is there a way of listing them through a case? Thanks.
Hello, I am getting the following warning message when trying to extract fields from the Splunk UI (web console). I could extract the fields, but my extracted fields are not showing up in my searches/queries. However, I can see the list of extractions under Settings -> Fields -> Field extractions in the Splunk UI (web console). What does this warning message mean, and why are my fields not showing up in my searches/queries? Any help will be highly appreciated. Thank you so much.
Warning message:
When using the Expand Your Search feature, the Expanded Search String output is stripped of any custom formatting, particularly newlines. When expanding a search, the macro should instead be expanded and inserted verbatim, and the formatting should be retained in the Expanded Search String pane.
I have a date column that I'm trying to convert to %m/%d/%Y. The date stamp is a little complex, but I had it working until daylight saving time took effect. Now anything with a timezone offset that has a non-zero number in the third digit, -0480 for example, returns blank. Below is my query:
| inputlookup DateStampConvert.csv
| rename "System Name" as systemName
| rename "Date Stamp" as DateStampDate
| eval dateStamp=strftime(strptime(DateStampDate, "%b %d %Y %H:%M:%S %z"), "%m/%d/%Y")
| table systemName dateStamp
| outputlookup dateStamp.csv
Is there something I'm missing?
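A hedged guess at the cause: -0480 is not a valid timezone offset (the minutes part must be 00-59), so %z fails to match and strptime returns null. Since only the date portion is kept anyway, one workaround is to strip the offset before parsing; the regex below assumes the offset is always a trailing 4-digit value, and note this ignores the offset entirely, which could shift the date for timestamps near midnight:

| eval cleanDate=replace(DateStampDate, "\s[+-]\d{4}$", "")
| eval dateStamp=strftime(strptime(cleanDate, "%b %d %Y %H:%M:%S"), "%m/%d/%Y")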
Hey Splunk gurus, I'm attempting to calculate the duration between when an event was first identified (an entry in the event, "alert.created_at") and the "_time" timestamp. I'm able to calculate this timestamp difference using strptime on "alert.created_at", but the conversion of that time to epoch is relative to the viewer's timezone. The duration changes based on how you configure the Splunk UI timezone. The "_time" field is set to "current" in props.conf. Here's my current search:
index=* alert.tool.name=* action="fixed"
| eval create_time=strptime('alert.created_at', "%Y-%m-%dT%H:%M:%SZ")
| eval duration = _time - create_time
Here's a sample of the log:
{
  "action": "fixed",
  "alert": {
    "number": 2,
    "created_at": "2021-11-22T23:49:19Z"
  }
}
When I execute this search while my UI preferences are set to "GMT", the result is 1183959, which is the correct duration. When I set that preference to "PST", the result is 1155159. That number is wrong by exactly 8 hours. Any suggestions on how to deal with this? I'm fine with either a search-time solution or a config change in props.conf if that's best. Thanks!
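A hedged sketch of one common workaround: the trailing Z is matched as a literal by the format string, so strptime has no offset information and assumes the viewer's timezone. Appending an explicit +0000 and parsing it with %z pins the value to UTC regardless of UI preferences:

index=* alert.tool.name=* action="fixed"
| eval create_time=strptime('alert.created_at' . "+0000", "%Y-%m-%dT%H:%M:%SZ%z")
| eval duration = _time - create_time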
I have a Linux server with Splunk Enterprise 6.5. My team manager wants me to upgrade Splunk from 6.5 to 8.2, but I couldn't find old releases on the Splunk download page. How can I upgrade it?
Hi all, I would like to know if there is a way to group multiple values from repeated fields that come in the same log. For example, take the following log events:
Log1: moduleName="Module A" moduleType="TypeA" moduleName="Module B" moduleType="TypeB"
Log2: moduleName="Module C" moduleType="TypeC" moduleName="Module A" moduleType="TypeA"
I tried something like:
app_search_criteria | stats count by moduleName | sort -count
But this way it only brings data for the first moduleName field it finds in a log, not all of them. For example, I'm getting the following table:
moduleName   count
ModuleA      1
ModuleC      1
The ideal output would be:
moduleName   moduleType   count
ModuleA      TypeA        2
ModuleB      TypeB        1
ModuleC      TypeC        1
Thanks in advance!
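A hedged sketch of one way to keep the name/type pairs together, assuming the quoting in the raw events is exactly as shown (the extraction names mName and mType are hypothetical): extract all occurrences, zip the two multivalue fields into pairs, then expand and count.

app_search_criteria
| rex max_match=0 "moduleName=\"(?<mName>[^\"]+)\"\s+moduleType=\"(?<mType>[^\"]+)\""
| eval pair=mvzip(mName, mType, "|")
| mvexpand pair
| eval moduleName=mvindex(split(pair, "|"), 0), moduleType=mvindex(split(pair, "|"), 1)
| stats count by moduleName moduleType
| sort -count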
I have user A who is assigned 3 different roles. Normally this isn't an issue, but one of those roles has a restricted search in it that will only show 4 servers in the main index. 2 of the 3 roles just grant access to specific indexes. The 3rd role grants access to the main index and has the following restriction:
(host::serverA OR host::serverB OR host::serverC OR host::serverD)
The issue I am having is that this restriction is carrying over to the other roles. How would I set this up so that only those 4 servers are matched in main, without the restriction carrying over to the other roles?
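One hedged idea, assuming the filters from the user's roles are being combined into a single restriction: scope the filter to the main index inside the filter itself, so it allows everything outside main while still limiting main to the four hosts. A sketch of the authorize.conf stanza (role name hypothetical; whether this suits your exact combination of roles would need testing):

[role_main_restricted]
srchIndexesAllowed = main
srchFilter = (NOT index::main) OR host::serverA OR host::serverB OR host::serverC OR host::serverD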
Hi All, how do I search the internal logs of a remote agent (UF) node via the Splunk portal? I am trying to troubleshoot why logs are not being ingested into Splunk from the remote agent node. I ran a simple search query from the search head console:
index="_internal" sourcetype="splunkd.log" host="test1"
but I am unable to get any results, so please let me know how to search the internal log details from the search head portal. When I log into the UF server I can see the Error | Warn | Info details in splunkd.log, but my intention is to check the same from the Splunk console. Kindly guide me on this.
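A hedged starting point: Universal Forwarders forward their own _internal logs by default, but the sourcetype of splunkd.log is normally splunkd; the file name appears in the source field, not the sourcetype. A sketch, with log_level narrowing to the Error/Warn entries mentioned above:

index=_internal host=test1 sourcetype=splunkd source=*splunkd.log* (log_level=ERROR OR log_level=WARN)

If this still returns nothing, it may be worth checking that the forwarder's outputs point at the indexers and that _internal forwarding has not been filtered out.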
I'm having more strange situations with my UF ingesting many big files. OK, I managed to make the UF read the current Exchange logs reasonably quickly (it seems that some age limits had been left ridiculously high by someone, so there were many files to check). So now there are several dozen (or even hundreds of) files tracked by splunkd, but it seems to work somehow. The problem is that I also monitor another quite quickly growing file on this UF, and it's giving me a headache. Some time after the UF starts, if restarted mid-day, I get:
TailReader - Enqueuing a very large file=\\<redacted> in the batch reader, with bytes_to_read=9565503150, reading of other large files could be delayed
OK, that's understandable; the batch reader is supposed to be more efficient at reading a single big file at once, why not. But the trick is, the file is not getting ingested. I don't see any new events in the index. And I checked with procexp64.exe and handle64.exe from SysInternals: the file is not open by splunkd.exe at all. So where is my file??? Other files are being monitored and their data is getting ingested.
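One knob that may be relevant, offered as an assumption rather than a confirmed fix: the size threshold at which a monitored file is handed to the batch reader is min_batch_size_bytes under [inputproc] in limits.conf. Raising it above the file's size keeps the file with the tailing processor. A hedged limits.conf sketch:

[inputproc]
# Files larger than this are handed to the batch reader instead of the
# tailing processor; the value below (~20 GB) is an arbitrary illustration.
min_batch_size_bytes = 21474836480

Whether this explains the file not even being held open by splunkd.exe is uncertain; the batch reader queues large files sequentially and may simply not have reached this one yet.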
Hello All, I need a search query where I can see all the index logs as stats by count, date, and index. I tried the search query below, but it didn't help:
index=* source=*license_usage.log type="Usage" splunk_server=* earliest=-2month@d
| eval Date=strftime(_time, "%Y/%m/%d")
| eventstats sum(b) as volume by idx, Date
| eval MB=round(volume/1024/1024,5)
| timechart first(MB) AS volume by idx
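A hedged rewrite that produces a count and volume per index per day: license_usage.log lives in the _internal index on the license master, and a single stats pass avoids the eventstats/timechart mismatch above (the field names bytes and MB are arbitrary):

index=_internal source=*license_usage.log type="Usage" earliest=-2mon@d
| eval Date=strftime(_time, "%Y/%m/%d")
| stats count sum(b) AS bytes BY Date idx
| eval MB=round(bytes / 1024 / 1024, 2)
| table Date idx count MB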