All Topics

I wanted to install Sysmon App for Splunk (App) and Microsoft Sysmon Add-on (Add-on) on my development server (Splunk 8.0.4.1). I am running my development server on Ubuntu 18.04.4 LTS. I thought it would be as easy as installing them both, but when I looked at the Sysmon App for Splunk I got no events searching over the last 24 hours. I noticed that I was getting events in Search, but none were making it to the App. I was getting an error for field extractions that said Splunk could not perform action for resource data/props/extractions:

(404, 'Splunk cannot find "data/props/extractions/source::XmlWinEventLog:Microsoft-Windows-Sysmon//Operational : REPORT-sysmon". [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/TA-microsoft-sysmon/data/props/extractions/source%253A%253AXmlWinEventLog%253AMicrosoft-Windows-Sysmon%252F%252FOperational%20%3A%20REPORT-sysmon?safe_encoding=1; [{'type': 'ERROR', 'code': None, 'text': 'Could not find object id=source%3A%3AXmlWinEventLog%3AMicrosoft-Windows-Sysmon//Operational : REPORT-sysmon'}]')

I removed both the App and the Add-on and started again. It looked like the App did not require the Add-on, so I only installed the App. I could then see several thousand Sysmon messages in the App (Overview), but it did not look like any of the other tabs or panels were populating. I also noticed (I thought) that an XmlWinEventLog source had appeared (before, it was just the WinEventLog source that referenced Sysmon). I installed the Add-on, and then the App stopped displaying the Sysmon messages in the Overview total panel. I then removed the Add-on, and I can now see the Event Count and Event Count Over Time (in the Sysmon Overview), but none of the other tabs (Network Activity, Process Activity, etc.) are populating. I have 34,000 events in the source="WinEventLog:Microsoft-Windows-Sysmon/Operational" query.

I have 670 events in the source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" query over the same time period (last 24 hours). In a somewhat desperate attempt I read through the Security Essentials docs on configuring Sysmon, and they recommended deploying the Add-on to the UF (on the Windows box running Sysmon). I configured that and checked that I was getting a LOT of events from Sysmon. I had used the SwiftOnSecurity configuration (https://github.com/SwiftOnSecurity/sysmon-config) to set up Sysmon on my test workstation. My ultimate goal is to send Sysmon information to Security Essentials so I can use it to detect suspicious activity. With the Add-on removed, there are very few fields in either the XmlWinEventLog or the WinEventLog data sources. I would love a direction to move forward on getting both the App and Security Essentials to work.
I have a list of around 100 hosts that are sending data to an index, and I would love to return a table with hostname and a status of 0 (didn't receive any data from it in the selected time range) or 1 (did receive data). I am able to search through multiple hosts with OR, like `host=test1 OR host=test2 OR ...`, but I am not sure how to display status 0 for hosts that are not found. What would be an efficient solution for this, please?
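A common sketch for this pattern, assuming the expected host list lives in a lookup file (expected_hosts.csv with a host column is a hypothetical name — adjust to your environment):

```
index=your_index
| stats count by host
| eval status=1
| append [| inputlookup expected_hosts.csv | eval status=0]
| stats max(status) as status by host
```

Hosts present in the data get status 1; hosts only in the lookup fall through with 0, so the final stats covers all 100 without a long OR list.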
Hi guys,

So basically I am trying to create a line chart plotting the fields status and time. Status can be anything like created, in-progress, repair, maintenance, complete, etc., and whenever the status changes I capture its time in the time field. All the results are grouped by unique IDs, so one ID can go through multiple statuses before it is complete. I am trying to capture all these details on a line chart. (I am not using timechart because some IDs may complete within a second while others might take hours, so I am not able to plot the time precisely in that case.)

As of now, I am able to get the status values on the X-axis of the line chart, but they are not coming in the proper status flow. Here is an example of the current results I am getting from my search query:

|chart count by status time

Status        05:32:11.711  05:32:18.896  05:33:15.531  05:35:39.722  05:36:28.321
Complete      0             0             0             0             1
Created       1             0             0             0             0
In-Progress   0             1             0             0             0
Post          0             0             0             1             0
Repair        0             0             1             0             0

So on the X-axis the statuses are plotted as Complete, Created, In-Progress, Post, Repair, which is not the right flow; it should be Created, In-Progress, Repair, Post, Complete.

If I could somehow sort the status rows by the timestamps, like below, then I guess I would be able to achieve this:

Status        05:32:11.711  05:32:18.896  05:33:15.531  05:35:39.722  05:36:28.321
Created       1             0             0             0             0
In-Progress   0             1             0             0             0
Repair        0             0             1             0             0
Post          0             0             0             1             0
Complete      0             0             0             0             1

I have tried multiple ways but no luck. Can someone please help me figure this out? If there is some other way to achieve this, I am willing to try that too. Thank you.
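One hedged sketch: assign an explicit sort key per status before charting, so the rows (and thus the X-axis) follow the workflow order rather than alphabetical order. The status names are taken from the question; the numeric prefix keeps chart's lexicographic row ordering in workflow order:

```
your search
| eval order=case(status=="Created",1, status=="In-Progress",2, status=="Repair",3, status=="Post",4, status=="Complete",5)
| eval status=order.". ".status
| chart count by status time
```

The X-axis labels then read "1. Created", "2. In-Progress", and so on; strip the prefixes with a rename or rex afterwards if they are unwanted.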
Trying to display percentages on a timechart, but it's not working.

Base search | fields APP Usage_kb | eval Usage_gb = round(Usage_kb/1024/1024, 5) | timechart count by APP

It's not working. I want to display a timechart of Usage_gb per APP. Please help me.
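A hedged sketch of the likely issue: `timechart count by APP` charts event counts and ignores the computed Usage_gb field. Summing Usage_gb per APP over time would look like this (field names taken from the question):

```
base search
| eval Usage_gb = round(Usage_kb/1024/1024, 5)
| timechart sum(Usage_gb) as Usage_gb by APP
```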
We migrated almost all of our existing indexes from traditional indexes with separate warm and cold mount paths to SmartStore a little under a year ago. It has all worked great; however, for indexes with long-term retention, buckets that were in the coldPath at the time of the SmartStore conversion continue to be stubbed out and localized from S3 back into the coldPath, while everything since the conversion uses the warm path — as expected, since that mount is the SPLUNK_DB definition used by the SmartStore indexes. I want to re-map the SPLUNK_COLD path to use the same OS mount, but what is the supported way to do that with SmartStore? From the documentation (https://docs.splunk.com/Documentation/Splunk/7.3.3/Indexer/Moveanindex) it sounds like you would normally copy the data manually from the old path to the new one and then re-map the variable. Does it work the same with SmartStore? Or is it just something like force-clearing the SmartStore cache on the OS mount I want to clear off, re-mapping the variable, and then letting new localization of buckets simply use the re-mapped path?
aid     SHA
abc     12345    12345
ujdk    9890     9890
yui     1239     1897

I would like to trigger an alert if a particular aid has different SHA values. In the above case, the aid yui should trigger an alert for me. Can someone help me with the SPL? Thanks in advance.
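A sketch of one way to flag aids whose SHA values differ, assuming the events carry aid and SHA fields (the index name is a placeholder):

```
index=your_index
| stats dc(SHA) as distinct_sha values(SHA) as SHA by aid
| where distinct_sha > 1
```

Only aids with more than one distinct SHA (yui in the example) survive the where clause, so the alert can simply fire on result count > 0.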
Hi all, I am currently getting the following results from my search query:

time1                 time2                 duration
06/26/2020 07:42:11   06/26/2020 07:42:55   0.73
06/26/2020 07:47:10   06/26/2020 07:55:39   8.48
06/26/2020 07:51:09

Here is an example of the results I am trying to get:

time1                 time2                 duration
06/26/2020 07:42:11   06/26/2020 07:42:55   0.73
06/26/2020 07:47:10
06/26/2020 07:51:09   06/26/2020 07:55:39   4.30

So basically what I want is that "time2" should pair with the latest preceding timestamp in "time1" before calculating the duration. Thanks in advance.
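One possible sketch, with hypothetical field names: parse the timestamps to epoch, carry the most recent time1 forward with streamstats, and recompute the duration only on rows where time2 exists (adjust the rounding and units to match your duration format):

```
your search
| eval t1=strptime(time1, "%m/%d/%Y %H:%M:%S"), t2=strptime(time2, "%m/%d/%Y %H:%M:%S")
| streamstats latest(t1) as latest_t1
| eval duration=if(isnotnull(t2), round((t2 - latest_t1)/60, 2), null())
```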
How can I read or get data from a .txt file without monitoring (indexing) the file data?
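One common approach, assuming the file can be converted to CSV: upload it as a lookup table (Settings > Lookups > Lookup table files) and read it with inputlookup, which does not index the data (the filename is a placeholder):

```
| inputlookup mydata.csv
```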
Hi, I am writing a search to create 3 columns of data, P, F and C, based on Teams. The table I expect is this:

Teams   P     C    F
team1   441   0    6
team2   4668  0    0
team3   2163  57   27

and the result table I got is:

Teams   P     C    F
team1   441   57   6
team2   4668       27
team3   2163

The search I am using is:

index="fq" | where Status="P" | stats count as P by Teams | fillnull value=0 P
| appendcols [ search index="fq" | where Status="F" | stats count as F by Teams | fillnull value=0 F]
| appendcols [ search index="fq" | where Status="C" | stats count as C by Teams | fillnull value=0 "Covered"]

I used fillnull too, but it did not work. Kindly help me with this.
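As a hedged sketch: appendcols joins rows by position, not by Teams, which is why counts land on the wrong rows when a team is missing from one subsearch. Pivoting the status counts per team in one pass avoids that entirely (index and field names taken from the question):

```
index="fq"
| chart count over Teams by Status
| fillnull value=0
```

This yields one row per team with one column per Status value (P, F, C), zero-filled where a team has no events for a status.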
Hello, I have to update the $SPLUNK_HOME/etc/system/local/limits.conf of clustered indexers with new parameters. I will have to create an app on the deployment server and deploy it to the indexers via the master node. In the app, do I need to create a limits.conf with all the parameters that already exist in the indexers' limits.conf plus the new parameters, or do I only put the new parameters to add? Thank you for your help.
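For context, Splunk layers configuration files by precedence, so a deployed app generally only needs the settings it changes; everything else falls back to the other config layers. A minimal sketch of the app's file (the app name, stanza, and value below are hypothetical examples, not your actual parameters):

```
# .../apps/my_limits_app/local/limits.conf
[search]
max_rawsize_perchunk = 200000000
```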
Hello. In the past few days I've been having an issue with my searches on Splunk. I have an instance on which I collect some AWS logs, and it worked perfectly until last week, when I suddenly started receiving this error on the job:

[indexer 1] restricting search to internal indexes only (reason: [DISABLED_DUE_TO_VIOLATION,0])
[indexer 2] restricting search to internal indexes only (reason: [DISABLED_DUE_TO_VIOLATION,0])

I also see this error on my two indexers:

[indexer 1] Streamed search execute failed because: Error in "litsearch" command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.
[indexer 2] Streamed search execute failed because: Error in "litsearch" command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

To clarify and reduce the scope of possible solutions, I'd like to add that my license is not expired and has not been exceeded, so I do not know what could be happening. Could someone help me out? Thanks in advance.
Hi everyone, I am trying to parse JSON data into plain, formatted text to use in an email alert. Here is my JSON:

{
  "DbMaintenanceDailyRoutineSummary": {
    "success": [
      {
        "server-002": [
          { "vacuum": true, "analyze": true, "warehouse": "mydatabase@aaaaaa" },
          { "vacuum": true, "analyze": true, "warehouse": "mydatabase@bbbbbb" }
        ]
      },
      {
        "server-003": [
          { "vacuum": true, "analyze": true, "warehouse": "mydatabase@ccccccc" },
          { "vacuum": true, "analyze": true, "warehouse": "mydatabase@ddddddd" }
        ]
      }
    ],
    "fail": [
      {
        "server-002": [
          { "vacuum": true, "analyze": false, "warehouse": "mydatabase@eeeeee" }
        ]
      },
      {
        "server-003": [
          { "vacuum": false, "analyze": true, "warehouse": "mydatabase@fffffff" },
          { "vacuum": true, "analyze": false, "warehouse": "mydatabase@gggggg" },
          { "vacuum": true, "analyze": false, "warehouse": "mydatabase@hhhhhh" }
        ]
      }
    ]
  }
}

Can I convert that JSON into something like the following?

DbMaintenanceDailyRoutineSummary
fail:
  server002:
    mydatabase@eeeeee - analyze: false, vacuum: true
  server003:
    mydatabase@fffffff - analyze: false, vacuum: true
    mydatabase@ggggg - analyze: false, vacuum: true

success:
  server002:
    mydatabase@aaaaaa - analyze: true, vacuum: true
    mydatabase@bbbbbb - analyze: true, vacuum: true
  server003:
    mydatabase@ccccccc - analyze: false, vacuum: true
    mydatabase@dddddd - analyze: false, vacuum: true

Any help appreciated.
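A rough, unverified sketch of flattening one branch of such nested JSON with spath and mvexpand (the path below assumes Splunk's auto-extracted names for this structure; the "success" branch would be handled the same way):

```
your search
| spath path=DbMaintenanceDailyRoutineSummary.fail{} output=fail_entries
| mvexpand fail_entries
| spath input=fail_entries
```

Each fail entry then becomes its own result row whose inner fields (vacuum, analyze, warehouse) can be assembled into text with eval and mvjoin before handing them to the email alert.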
Greetings, I need to disable TLSv1.1 in our setup. I was able to follow the documentation for doing so on the Controller server; however, there seems to be no reference for the reporting service. I need your help to guide me on how to disable it. FYI: both services are installed on the same host, so if HTTPS is not needed to communicate with the Controller, I can go ahead and disable it in the reporting service.
Hi all, I am trying to use the OSSEC archives.log to collect logs from different systems. It can collect whatever you need from Windows and Linux systems and gathers everything inside the archives.log file as raw log lines. I then need to parse the file and assign the correct sourcetype, source, and host to each event. I tried using props.conf and transforms.conf with the available transformations. I succeeded in giving Windows events a WinEventLog sourcetype with that method, and it correctly assigns the sourcetype and trims the event body from the original log line. However, the fields are not correctly extracted from those Windows events. Sample archives.log lines for two Windows events and one Linux event:

2020 Jun 16 00:01:04 (E-Fl) 192.168.3.2->WinEvtLog 2020 Jun 16 00:01:00 WinEvtLog: Security: AUDIT_SUCCESS(4672): Microsoft-Windows-Security-Auditing: (no user): no domain: eFl: Special privileges assigned to new logon. Subject: Security ID: S-1-5-21-3960285484-3209917605-2958509563-1006 Account Name: t_apx Account Domain: EFL Logon ID: 0x133a050c7 Privileges: SeSecurityPrivilege SeTakeOwnershipPrivilege SeLoadDriverPrivilege SeBackupPrivilege SeRestorePrivilege SeDebugPrivilege SeSystemEnvironmentPrivilege SeImpersonatePrivilege SeDelegateSessionUserImpersonatePrivilege
2020 Jun 16 00:01:06 (SE-Cloud) 192.168.9.194->/var/log/messages Jun 16 00:01:05 ccrtl13c snmpd[1204]: Connection from UDP: [192.168.9.202]:50515->[192.168.9.194]:161
2020 Jun 16 00:01:08 (FTP) 192.168.9.230->WinEvtLog 2020 Jun 16 00:01:05 WinEvtLog: System: WARNING(51): Disk: (no user): no domain: FTPPublic.serv.local: An error was detected on device \Device\Harddisk5\DR5 during a paging operation.

my props.conf:

[ossec_archives]
TRANSFORMS-assignSourcetype = extractEvent, assignWinEvtLog
#,assignSyslog

my transforms.conf:

###### OSSEC_Archives ######
[extractEvent]
SOURCE_KEY = _raw
REGEX = WinEvtLog\s(.*)$
FORMAT = $1
DEST_KEY = _raw
#CLONE_SOURCETYPE = WinEventLog

[assignWinEvtLog]
#CLONE_SOURCETYPE = WinEventLog
REGEX = WinEvtLog:
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::WinEventLog

#[assignSyslog]
#REGEX = \s[WinEvtLog:].*$
#DEST_KEY = MetaData:Sourcetype
#FORMAT = sourcetype::syslog

Can you please help me get the data in correctly, so that the default Windows and Linux add-ons extract the related fields? Thanks
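As a hedged sketch using the same transform mechanism: a host override could sit alongside the sourcetype one, pulling the source IP out of the archives.log prefix so per-host searches work after re-sourcetyping. The stanza name and regex below are illustrative and untested against all OSSEC line formats:

```
# transforms.conf
[setHostFromArchives]
REGEX = ^\d{4}\s\w{3}\s\d{1,2}\s[\d:]+\s\([^)]+\)\s([\d.]+)->
DEST_KEY = MetaData:Host
FORMAT = host::$1
```

It would be referenced from the same props.conf stanza, e.g. before extractEvent rewrites _raw: TRANSFORMS-assignSourcetype = setHostFromArchives, extractEvent, assignWinEvtLog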
Hi all, our environment topology is UF > HF > Indexer. The UF is installed on a server in a different time zone, so I want to find the time difference between the UF server's time and the HF's time. How do I do it?
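One hedged sketch for surfacing clock or time-zone differences between forwarders: compare each event's parsed timestamp with the time it was indexed (the index name is a placeholder):

```
index=your_index
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by host
```

A large, consistent lag for one host often points to a time-zone or clock offset on that forwarder rather than genuine forwarding delay.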
I configured an alert to send mail whenever an Error-type event is triggered in the Windows event log. I need to customize the subject for that alert with the server name, event type, and event ID. For example: Alert triggered for 'ServerName' with 'Event Type' and 'Event ID'. Below is the search I'm using:

index="wineventlog" source="wineventlog:application" SourceName="MSSQLSERVER" Type=Error
    [| inputlookup inv where client_group="*SQLServer Admin*" | fields name | rename name as host]
| table _time, host, EventID, Type, Message
| sort _time desc
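A sketch of one approach, assuming the alert's email action supports result tokens (they are filled from the first result row): set the Subject field in the alert's email action settings to something like:

```
Alert triggered for $result.host$ with $result.Type$ and $result.EventID$
```

Since the search already tables host, Type, and EventID and sorts newest first, the subject would reflect the most recent matching event.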
I'm looking for a Splunk query to find any suspicious IP address associated with an IP range that was already blocked in the top ten. Thank you.
One of the system admins renewed the certificate and thought they needed to delete the splunk.key file in the mongo folder. Looking at another server, it looks like a password rather than a true key. How do I regenerate it? We don't have a backup either, as they took too long to own up. Cheers
Hi all, I have very long events (more than 10,000 characters) that I have to send via syslog (UDP) to a third-party system. I'm working on a Heavy Forwarder with Splunk 8.0.3 running on Red Hat Linux. Events are truncated at 1024 characters. I know about the maxEventSize parameter to put in outputs.conf, but it doesn't help in my situation (I set a much larger value in maxEventSize). Has anyone had the same problem? Thanks in advance. Ciao. Giuseppe
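For reference, a hedged sketch of a syslog output stanza with maxEventSize raised (the stanza name and server address are placeholders). Note that a single UDP datagram cannot exceed roughly 64 KB regardless of this setting, and intermediate devices may truncate sooner, so switching the output to TCP is often part of the fix for very long events:

```
# outputs.conf
[syslog:long_events_out]
server = 10.0.0.5:514
type = udp
maxEventSize = 100000
```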