Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.
I have been trying to log in but get a 500 error. I changed the port and that worked for a while, but now that too presents the correct login window and then goes straight to the 500 error page as soon as I enter my credentials. Any ideas?
Hi All. I need help with a Splunk query for the scenario below. I need to show the status of my cron job in this format:

Starttime: time when the job starts
FinishTime: time when the job finishes
CurrentStatus: Started/Running/Finished

Start event:
INFO | jvm 1 | main | 2020/07/16 03:30:08.407 xxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Found 13 set of files to process

End event:
INFO | jvm 1 | main | 2020/07/16 04:21:57.914 xxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Ended . RESULT :true

In between there are many lines, for which the status should be Running. Thanks in advance.
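A sketch of one way to approach this; the index name is a placeholder and I'm assuming the quoted phrases uniquely identify the start and end events:

```
index=myapp ("Found * set of files to process" OR "Ended . RESULT")
| stats earliest(_time) as start_t
        latest(eval(if(searchmatch("Ended . RESULT"), _time, null()))) as end_t
| eval CurrentStatus=if(isnull(end_t), "Running", "Finished")
| eval Starttime=strftime(start_t, "%Y/%m/%d %H:%M:%S")
| eval FinishTime=strftime(end_t, "%Y/%m/%d %H:%M:%S")
| table Starttime FinishTime CurrentStatus
```

If the end event has not arrived yet, end_t is null and the status falls through to Running.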
I'm trying to extract the "Flash Date" field and use it as the timestamp when I index my CSV file, but I'm getting random results: in some cases the event grabs the "Start Time", in others it matches "End Time". Any help would be greatly appreciated.

Questions:
1) If the field name contains a space, do I need to enclose it in double quotes when specifying TIMESTAMP_FIELDS?
2) Can I use just a date with no time, as seen in the values of "Flash Date"?

My CSV file:

Folder,Job Name,Flash Date,Job Status,Start Time,End Time
S1,J1,"July 19, 2020",Ended OK,"July 19, 2020 3:00:121 PM","July 19, 2020 3:00:23" PM
S1,J2,"July 1, 2020",Failed,"July 2, 2020 3:00:21 PM","July 9, 2020 5:00:00 PM"
S1,J3,"July 4, 2020",Failed,"",""
S1,J3,"July 4, 2020",Ended OK,"July 4, 2020 12:00:00 PM",""

My props.conf:

[my_csv]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
DATETIME_CONFIG =
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = Flash Date
TIME_FORMAT = %B %d, %Y
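My understanding (worth verifying against the props.conf spec): TIMESTAMP_FIELDS takes a comma-separated list, so a single field name containing a space should not need quotes, and a date-only TIME_FORMAT is allowed — the time-of-day defaults to midnight. One guess about the random results is the blank `DATETIME_CONFIG =` line interfering with timestamp processing; a sketch without it:

```
[my_csv]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
# No quotes: the value is a comma-separated list, so a space
# inside one field name is fine
TIMESTAMP_FIELDS = Flash Date
# Date-only format is valid; events get a midnight time-of-day
TIME_FORMAT = %B %d, %Y
```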
I'm looking at signatures in Snort and I want to exclude some of the signature IDs by using an inputlookup, but it doesn't seem to exclude them. My search:

index=security-snort sourcetype="snort"
| search NOT [ | inputlookup SnortSigEx.csv ]
| stats values(name) as name by signature
| dedup signature
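A common cause is that the subsearch returns every column of the lookup, so the generated NOT clause requires all of them to match at once. A sketch, assuming the lookup has a column named signature that corresponds to the event field of the same name (rename it inside the subsearch if not):

```
index=security-snort sourcetype="snort"
    NOT [| inputlookup SnortSigEx.csv | fields signature ]
| stats values(name) as name by signature
```

Note that `stats ... by signature` already produces one row per signature, so the trailing dedup is redundant.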
Hello, I have an application that logs an "appsvr disconnected" and an "appsvr connected" message in the app log. I have created an extraction called connectionstatus to capture this. I would like to create a Splunk alert that notifies me only when connectionstatus="appsvr disconnected" appears in the log and is not followed by connectionstatus="appsvr connected" within a 30-second window. I am trying to cut down on the false positives; it seems fairly straightforward, but I haven't been able to come up with a search that satisfies this condition. Any help is greatly appreciated. Thanks!

Conditions:
- connectionstatus="appsvr disconnected" appears in the log by itself for 30 seconds or more: alert.
- connectionstatus="appsvr disconnected" appears in the log, but connectionstatus="appsvr connected" also appears within a 30-second window: false positive, don't alert.

-Magnus
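A sketch using the transaction command (the index name is a placeholder): keepevicted=true retains transactions that never saw the endswith event, and closed_txn=0 selects exactly those, i.e. disconnects with no reconnect inside 30 seconds:

```
index=myapp connectionstatus="appsvr *"
| transaction keepevicted=true maxspan=30s
    startswith=eval(connectionstatus=="appsvr disconnected")
    endswith=eval(connectionstatus=="appsvr connected")
| where closed_txn=0
```

An alert condition of "number of results > 0" on this search should then fire only on unresolved disconnects.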
I copied the default inputs.conf to local and added some monitor configurations. There are seven monitors set up, but only six are reporting; /var/log/messages is not working.

[monitor:///var/log/messages]
disabled = false
#index = linuxlog
sourcetype = syslog

[monitor:///var/log/secure]
disabled = 0
sourcetype = linux_secure

The logs from secure do show up, and the other monitors are working as expected. I have tried with disabled = 0, and with and without index and sourcetype. All the examples I am finding indicate that this should be working.
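A few checks worth running before touching the config again — on many distributions /var/log/messages is readable only by root, so a forwarder running as an unprivileged splunk user cannot open it (the "splunk" user below is an assumption about your setup):

```
# Can the user running splunkd actually read the file?
ls -l /var/log/messages
sudo -u splunk head -n 1 /var/log/messages

# How did Splunk resolve the stanza, and what does it think of the input?
$SPLUNK_HOME/bin/splunk btool inputs list monitor:///var/log/messages --debug
$SPLUNK_HOME/bin/splunk list inputstatus
```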
I have a report that runs every week on Monday, and I'm using earliest and latest time in my search. Now I want to add a new field to my search called lastdate: if the report period is 07/01 to 07/07, the lastdate field should display 07/07. And for my monthly report, how do I create a new field called MonthEnd that displays, for example, June 30 as the month-ending date? Please help.
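A sketch, assuming the report's latest time is a snapped, exclusive boundary (e.g. latest=@d for the weekly report or latest=@mon for the monthly one): addinfo exposes the search window, and subtracting one second from info_max_time lands inside the last day of the period:

```
<your base search>
| addinfo
| eval lastdate=strftime(info_max_time - 1, "%m/%d")
| eval MonthEnd=strftime(info_max_time - 1, "%B %d")
```

With a June report ending at latest=@mon (July 1, 00:00:00), info_max_time - 1 falls on June 30, so MonthEnd renders as "June 30".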
Searches are failing on a core SH cluster.

splunkd.log:

Unable to distribute to peer named myindexer00 at uri https://10.1.110.1:8089 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_SIZE_EXCEEDS_MAX_SIZE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.

splnks@splk07 /opt/splunk/bin $ ./splunk cmd btool distsearch list --debug | grep maxBundleSize
/opt/splunk/etc/apps/hawkEye/default/distsearch.conf maxBundleSize = 3096

splnks@splk07 /opt/splunk/var/run $ ls -alrth
drwx------. 12 splunk splunk 4.0K Jul 22 14:34 searchpeers
-rw-------. 1 splunk splunk 46 Jul 22 16:01 2A59DFC9-2832-4ABD-B8E1-BC991A6379D0-1595448064.bundle.info
-rw-------. 1 splunk splunk 3.0G Jul 22 16:01 2A59DFC9-2832-4ABD-B8E1-BC991A6379D0-1595448064.bundle
-rw-------. 1 splunk splunk 46 Jul 22 16:12 2A59DFC9-2832-4ABD-B8E1-BC991A6379D0-1595448738.bundle.info
-rw-------. 1 splunk splunk 2.9G Jul 22 16:12 2A59DFC9-2832-4ABD-B8E1-BC991A6379D0-1595448738.bundle
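The ls output shows ~3 GB bundles against a 3096 MB maxBundleSize, so the bundle itself is oversized; large lookup CSVs under etc/apps are the usual culprit (find them with `find $SPLUNK_HOME/etc/apps -name "*.csv" -size +100M`). Rather than raising maxBundleSize, one hedged option is to exclude the offending lookups from bundle replication on the search head — the stanza key and pattern below are illustrative and the exact syntax should be checked against the distsearch.conf spec:

```
# distsearch.conf on the search head
[replicationBlacklist]
huge_lookups = (?i)apps[/\\]hawkEye[/\\]lookups[/\\].*\.csv
```

Excluded lookups are then no longer available to searches on the peers, so this only suits lookups that don't need to run at the indexer tier.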
Hello Team, whenever I use the rename command to rename the _time field, the output comes out in epoch (numeric) format. For example, _time shows 2020/07/21, but after renaming it to Time I get 1592830387.
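That number is expected: _time is stored internally as Unix epoch seconds, and the UI only pretty-prints it while the field is literally named _time. Converting explicitly with strftime instead of renaming keeps the readable form — a sketch:

```
<your base search>
| eval Time=strftime(_time, "%Y/%m/%d %H:%M:%S")
| table Time <other fields>
```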
The workspace ID, app ID, subscription, and resource groups are correct.

2020-07-24 18:14:47,564 ERROR pid=7688 tid=MainThread file=base_modinput.py:log_error:307 | OMSInputName="log_app_common_01_AzureActivity_001" status="404" step="Post Query" response="{"error":{"message":"The requested path does not exist","code":"PathNotFoundError"}}"

The Log Analytics query is simply AzureActivity. Is this maybe a network issue? Has anyone seen this?
Hello guys, does the maxTotalDataSizeMB parameter in indexes.conf still apply if we use a volume for coldPath (and homePath, for that matter)? The maxTotalDataSizeMB default value is 500000.

homePath = volume:volume-loadbalancer-hot/MYINDEX
homePath.maxDataSizeMB = 400000
coldPath = volume:volume-loadbalancer-cold/MYINDEX
coldToFrozenDir = /VAR/siem/frozen/MYINDEX

Thanks.
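My understanding (worth confirming against the indexes.conf spec) is yes: maxTotalDataSizeMB is a per-index cap that is enforced independently of any volume-level maxVolumeDataSizeMB, and whichever limit is reached first triggers rolling to frozen. A sketch with both limits set explicitly — the volume path and sizes here are illustrative:

```
[volume:volume-loadbalancer-hot]
path = /var/siem/hot
maxVolumeDataSizeMB = 800000

[MYINDEX]
homePath = volume:volume-loadbalancer-hot/MYINDEX
homePath.maxDataSizeMB = 400000
coldPath = volume:volume-loadbalancer-cold/MYINDEX
coldToFrozenDir = /VAR/siem/frozen/MYINDEX
# Per-index cap; still applies when the paths are volume-based
maxTotalDataSizeMB = 500000
```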
I am using a universal forwarder. I created an app named cisco-ios, then put inputs.conf, props.conf, and transforms.conf inside the app. In inputs.conf I am monitoring syslogs, and I can receive syslog in the console. I have too many logs; I want to filter the logs containing the "Domain" keyword in the raw events.

props.conf:
[source::.../syslog/cisco/ios/cisco-ios.log(.\d+)?]
TRANSFORMS-null = setnull

transforms.conf:
[setnull]
REGEX = \[^%.*:$\]
DEST_KEY = queue
FORMAT = nullQueue

But I am unable to filter the logs containing the keyword "Domain". Please help me here.
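Two things worth noting here. First, index-time nullQueue filtering runs on the first full Splunk instance in the path (indexer or heavy forwarder), not on a universal forwarder, so these stanzas likely need to live on the indexer. Second, if the goal is to keep only events containing "Domain", the documented pattern is to discard everything and then re-queue the matches — a sketch:

```
# props.conf (on the indexer / heavy forwarder)
[source::.../syslog/cisco/ios/cisco-ios.log(.\d+)?]
TRANSFORMS-filter = setnull, keepDomain

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keepDomain]
REGEX = Domain
DEST_KEY = queue
FORMAT = indexQueue
```

Order matters: setnull routes every event to the null queue, and keepDomain then overrides that for events matching Domain.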
So suppose that every day Splunk takes in a report that houses 9 different fields, one of which is called 'status'. Status can be 'New', 'Closed', or 'Open'. I'm trying to show a timechart of the average count per month of reports whose status is 'Closed' or 'Open', along with the difference between the two (every day). I have found how to do the difference between 'Open' and 'Closed', but I cannot figure out how to find the monthly average for 'Open' and 'Closed'. Does anyone have any ideas how to query this?

index=blah...
| timechart span=1mon count(report_date) by status
| eval difference=abs(OPEN - CLOSED)

This is where I'm stuck: how do I get the monthly average of the particular status fields?
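One hedged approach: compute daily counts first, then average those per month with a second timechart (which is legal because the first timechart preserves _time). The OPEN/CLOSED capitalization follows the question and may need adjusting to the actual values:

```
index=blah
| timechart span=1d count by status
| eval difference=abs('OPEN' - 'CLOSED')
| timechart span=1mon avg(OPEN) as avg_open avg(CLOSED) as avg_closed avg(difference) as avg_difference
```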
I'm trying to set up the GSuiteForSplunk input add-on on Google Cloud. I've followed the instructions (I think), created an OAuth2 token, and set that up in the app, but I'm getting 100% errors accessing the Admin SDK API. I think the problem could be the account I'm using to authorize the token (I'm using my normal Google account, but it feels like I should be using the Service Account?), but in the main I've no idea what's wrong. Please help. Thanks.
When I try to configure the Splunk DB Connect app, I see the error below:

2020-07-24 11:18:06.745 -0400 [dw-55 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: af5d8ac47fd41568
java.lang.NullPointerException: null
    at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
    at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116)
    at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
Hi, I have installed Splunk Enterprise 7.1.2 and SA-Eventgen 6.5.3. I have not installed an extra Python version; Splunk is using its own Python 2.7 engine. I am on the Windows 2012 R2 server operating system. I have set the environment variables:

PYTHONPATH=C:\splunk\bin;C:\Splunk\Python-2.7\Lib;C:\Splunk\Python-2.7\Lib\site-packages;C:\Splunk\Python-2.7\Lib\site-packages\splunk;C:\Splunk\Python-2.7\Scripts
SPLUNK_HOME=C:\splunk
SPLUNKPATH=C:\splunk

After activating Eventgen in Data Inputs, no events are generated. I looked into the splunkd.log file and found a huge list of the following error messages:

07-24-2020 16:51:39.450 +0200 ERROR ExecProcessor - message from "python C:\Splunk\etc\apps\SA-Eventgen\bin\modinput_eventgen.py" 2020-07-24 16:51:39 eventgen DEBUG MainProcess {'event': "Loading module 'output.awss3' from 'awss3.py'"}
07-24-2020 16:51:39.450 +0200 ERROR ExecProcessor - message from "python C:\Splunk\etc\apps\SA-Eventgen\bin\modinput_eventgen.py" 2020-07-24 16:51:39 eventgen DEBUG MainProcess {'event': "Searching for plugin in file 'C:\\Splunk\\etc\\apps\\SA-Eventgen\\lib\\splunk_eventgen\\lib\\plugins\\output\\counter.py'"}
07-24-2020 16:51:39.450 +0200 ERROR ExecProcessor - message from "python C:\Splunk\etc\apps\SA-Eventgen\bin\modinput_eventgen.py" 2020-07-24 16:51:39 eventgen DEBUG MainProcess {'event': "Loading module 'output.counter' from 'counter.py'"}
07-24-2020 16:51:39.450 +0200 ERROR ExecProcessor - message from "python C:\Splunk\etc\apps\SA-Eventgen\bin\modinput_eventgen.py" 2020-07-24 16:51:39 eventgen DEBUG MainProcess {'event': "Searching for plugin in file 'C:\\Splunk\\etc\\apps\\SA-Eventgen\\lib\\splunk_eventgen\\lib\\plugins\\output\\devnull.py'"}
07-24-2020 16:51:39.450 +0200 ERROR ExecProcessor - message from "python C:\Splunk\etc\apps\SA-Eventgen\bin\modinput_eventgen.py" 2020-07-24 16:51:39 eventgen DEBUG MainProcess {'event': "Loading module 'output.devnull' from 'devnull.py'"}
07-24-2020 16:51:39.450 +0200 ERROR ExecProcessor - message from "python C:\Splunk\etc\apps\SA-Eventgen\bin\modinput_eventgen.py" 2020-07-24 16:51:39 eventgen DEBUG MainProcess {'event': "Searching for plugin in file 'C:\\Splunk\\etc\\apps\\SA-Eventgen\\lib\\splunk_eventgen\\lib\\plugins\\output\\file.py'"}
07-24-2020 16:51:39.450 +0200 ERROR ExecProcessor - message from "python C:\Splunk\etc\apps\SA-Eventgen\bin\modinput_eventgen.py" 2020-07-24 16:51:39 eventgen DEBUG MainProcess {'event': "Loading module 'output.file' from 'file.py'"}

Can anyone give me an idea of what I have done wrong? I watched a tutorial video on YouTube and applied the same settings, but I still get the errors.

Best regards,
Enrico
Hi All, we notice seemingly weird behaviour where modifying the notable severity in a correlation search brings historic events into the "Incident Review" pane with the new severity. We use Enterprise Security ver. 5.3.0.

To explain further with a hypothetical scenario:
- Say a use case like "password violation on a critical asset", with a notable of informational severity, fires 5 times a day on average.
- This noon, I change the severity from informational to high.
- Navigating to Incident Review and choosing a time period of 30 days (for instance) brings back 30 days' worth of notables for this use case, but all with high severity (which is not right).
- Ideally we would expect the events to be split across two severities: informational until noon today, and high for any events after.

Has anyone faced this issue? Is it by design? Is there a solution?
I have been able to find searches for roles mapped to AD groups, but I also need the indexes those roles are allowed to use. I'm creating a search to assist in upgrading to a new system, and this mapping will give me the relationships. I have used:

| rest /services/admin/roles | table title, srchIndexesAllowed | rename title as role

but it only gives me the role and search indexes; I need to map those to AD groups too. (Also, when I went to download that search's content, it did not download all the data.) What I really need is all three values in the same search: role mapped to AD group, and role mapped to indexes.
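A partial sketch for the role-to-index half: expanding the multivalue field gives one row per role/index pair, and including the imported indexes catches inheritance. Joining in the AD-group side would need the LDAP configuration endpoint for your authentication strategy, which I'd look up in the REST API reference rather than guess at here:

```
| rest /services/admin/roles
| fields title srchIndexesAllowed imported_srchIndexesAllowed
| rename title as role
| mvexpand srchIndexesAllowed
| table role srchIndexesAllowed imported_srchIndexesAllowed
```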
Hi all, I'm new to Splunk, so forgive my ignorance. We're currently using Splunk as a SIEM and I'm having trouble putting a search query together. What I'm looking for is: show me all unique sources that have initiated ICMP traffic. I believe I'm getting the correct results with the following search:

index="indexabc" dest_port=0 | stats count by src_ip

The additional information I'm looking for is: of those unique source IPs, find the ones that have 10 or more unique destination IPs (dest_ip).
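A sketch building on the search above: dc() counts distinct destination IPs per source, and where keeps sources with 10 or more (dest_port=0 as an ICMP proxy is taken from the question as-is):

```
index="indexabc" dest_port=0
| stats dc(dest_ip) as unique_dests count by src_ip
| where unique_dests >= 10
```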
We have Chef in our environment, as well as a deployment server. Is there any guide out there on how to deploy UFs across hundreds of Linux and Windows servers?
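The common pattern is to have Chef lay down the package and point each forwarder at the deployment server, which then pushes the actual apps and inputs. A hedged sketch of the per-host commands a Chef recipe would template — hostnames, versions, and credentials below are placeholders:

```
# Linux
tar -xzf splunkforwarder-<version>-Linux-x86_64.tgz -C /opt
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
/opt/splunkforwarder/bin/splunk set deploy-poll ds.example.com:8089
/opt/splunkforwarder/bin/splunk restart

# Windows (the UF MSI supports silent-install properties)
msiexec /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" /quiet
```

Alternatively, a deploymentclient.conf file dropped into etc/system/local by Chef achieves the same as the set deploy-poll step.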