All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have file1.csv and file2.csv with a common field, "Tests". I want to compare the "Tests" field in file2.csv against the "Tests" field in file1.csv and generate a percentage. For example, if file1.csv has 4 tests and file2.csv has 2, the result would be 50%.
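A minimal SPL sketch, assuming both files are uploaded as lookup files under those names and that counting distinct "Tests" values is the right measure (both are assumptions):

| inputlookup file1.csv
| stats dc(Tests) as baseline
| appendcols
    [| inputlookup file2.csv
    | stats dc(Tests) as matched]
| eval percentage=round(matched / baseline * 100, 1)

With the example counts above, matched=2 and baseline=4, giving percentage=50.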
Hello, has anyone faced the issue below when a source (using Logstash) tries to ingest logs to a Splunk HF via HEC?

:message=>"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"

I'm stuck on this issue; please help me out. Thanks
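This error means the JVM running Logstash does not trust the certificate presented by the HEC endpoint. A hedged sketch, assuming the HF presents a self-signed or internal-CA certificate and that hec_cert.pem is that certificate in PEM form (the file name and alias are assumptions), is to import it into the Java truststore Logstash uses:

# import the HEC certificate into the JVM truststore (default password shown)
keytool -importcert -alias splunk-hec -file hec_cert.pem \
    -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit

Alternatively, the Logstash http output plugin accepts a cacert option pointing at the PEM file, which avoids touching the truststore.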
Hello, we are seeing the "Splunk Cloud is under maintenance" banner message when trying to access Splunk dashboards developed in Dashboard Studio. Has anyone experienced the same issue? Thanks
Hello, I've got a data input where zipped evtx files are placed for ingestion on a server with the UF installed on it. The local inputs.conf file is modified to point to the folder with the files. The zipped files are unzipped to the Program Files\var\run\splunk\upload folder, where they stay until ingested. The problem starts when I interrupt the Splunk instance, either by restarting the server or restarting it manually from the cmd prompt (i.e., splunk restart). After restarting the service, the files don't seem to resume ingestion. How do I make sure that the UF resumes where it left off? Thanks.
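One thing worth checking, as a hedged sketch: a batch input consumes and deletes files and will not resume a partially read file after an interruption, whereas a monitor input tracks its read position in the fishbucket and picks up where it left off across restarts. The stanza below is an assumption about what the input might look like (path, index, and sourcetype are placeholders):

[monitor://C:\Program Files\SplunkUniversalForwarder\var\run\splunk\upload\*.evtx]
disabled = false
index = winevents
sourcetype = preprocess-winevt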
I have been trying to extract a field to list domain admins from AD logs. In the logs, all the admins start with CN=, as shown in the expression. Despite working on regex101, the expression won't extract in Splunk. I've tried making little modifications, but to no avail. Please help.

Expression:
source="ActiveDirectory" AND "CN=Domain Admins" AND member=* | rex field=_raw"(?<=CN=)[\w .]*(?=,)(?<admin>)/g"

The logs look similar to this:
CN=Admin Account,OU=Vendor Accounts,OU=IT,DC=domain,DC=domain
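The posted rex has three problems: the named capture group (?<admin>) is empty, the trailing /g is a regex101 flag that Splunk's rex does not accept (use max_match instead), and rex needs a space between field=_raw and the quoted pattern. A hedged rewrite that captures everything between CN= and the next comma:

source="ActiveDirectory" AND "CN=Domain Admins" AND member=*
| rex field=_raw max_match=0 "CN=(?<admin>[^,]+)"

max_match=0 returns every CN= occurrence in the event as a multivalue admin field; drop it if you only want the first.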
Hi, is it possible to monitor F5 load balancer SSL certificates using Splunk? Thanks.
What are the big differences in usability between Splunk Cloud and Splunk Enterprise? We are a finance company with around 75 people. We currently use SolarWinds as our SEM. We looked into Splunk because our goal is to centralize logs and transition to Splunk as our SEM. We want our firewall, update manager, anti-malware, etc. to all have logs in a centralized place. Will Splunk Enterprise/Cloud be able to centralize logs? If so, which of Splunk Cloud or Splunk Enterprise would be better for the use case (SEM) I am after? Thanks!
I have some doubts regarding the migration procedure of the Splunk Cloud Platform to Victoria.
1. During migration, intermittent inaccessibility to search heads can occur. Does this mean we may see an increase in our Mean Time To Detect (MTTD)?
2. How long is the maintenance window expected to be? This will help us determine the impact of the expected degradation over the course of the maintenance window.
3. Do we expect any impact on the apps for the client?
4. Is there any pre-testing available for this upgrade?
5. Is there any sort of back-out plan?
Please help me with these questions; I've been trying hard to find the answers in the docs without success.
I have historical data in Splunk where the same host may appear as either Hostname.Domain.Com or Hostname. I would like all searches that specify Hostname to also gather events for Hostname.Domain.Com, without modifying any searches. I can't delete and reindex, so that's right out. I found this post, which seems to be more or less what I want to do, but it isn't working, and I'm not sure why. It's older, so maybe the settings need to be different. What is the easiest way to accomplish this goal? Cheers.
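For reference, a hedged sketch of the standard index-time host rewrite (the stanza pattern and regex are assumptions; note this only affects events indexed after the change, so the historical split would still need search-time handling such as an event type covering both forms):

# props.conf
[host::*.Domain.Com]
TRANSFORMS-normalize_host = strip_domain

# transforms.conf
[strip_domain]
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Host
REGEX = ^host::([^\.]+)\.
FORMAT = host::$1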
Hey all, I need some advice regarding our syslog storage setup. We're using rsyslog, and at the moment we've got all firewall logs going into a single log file, which is getting pretty large at this point. I'm then using the universal forwarder to send this over to Splunk. The log file is currently around 150 GB and growing. We've got plenty of space, but I was wondering: is there a better way I should be approaching this? For example, should I break the logs up so that each firewall has its own directory, with new subdirectories per day? Any insight would be appreciated. Thanks, Will
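Per-host, per-day files do keep individual files small and make UF file tracking cheaper. A hedged rsyslog sketch using a dynamic file template (the path and the facility test are assumptions; match on whatever identifies your firewalls):

template(name="FwPerHostDaily" type="string"
         string="/var/log/firewalls/%HOSTNAME%/%$YEAR%-%$MONTH%-%$DAY%.log")
if ($syslogfacility-text == "local4") then {
    action(type="omfile" dynaFile="FwPerHostDaily")
    stop
}

A single recursive [monitor:///var/log/firewalls] stanza on the UF would then pick up new files automatically; pairing this with rotation or deletion of old directories keeps the tracked file count bounded.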
index="*dockerlogs*" source="*gps-request-processor-dev*" OR source="*gps-external-processor-dev*" OR source="*gps-artifact-processor-dev*"
| eval LabelType=coalesce(labelType, documentType)
| stats count(eval(status="Received" AND source like "%gps-request-processor%")) as received count(eval(status="Failed")) as failed by LabelType

LabelType        Received    Failed
---------        --------    ------
CARRIERLABEL     2           2
NIKE             39          35
TASKSTART        1           0

I want to transform the result above into the table below, where:
1) category can be 'external' or 'internal': if LabelType is 'CARRIERLABEL' then category is 'external'; for any other LabelType it is 'internal'.
2) successcount = received - failed

category    successcount
--------    ------------
external    0
internal    5
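A hedged continuation of the posted search (field names taken from the stats output above):

| eval category=if(LabelType="CARRIERLABEL", "external", "internal")
| stats sum(received) as received, sum(failed) as failed by category
| eval successcount=received - failed
| table category successcount

With the sample numbers, external gives 2-2=0 and internal gives (39+1)-(35+0)=5, matching the expected table.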
How do I change my WinEventLog events to output like this...

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
  <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
  <EventID>4625</EventID>
  <Version>0</Version>
  <Level>0</Level>
  <Task>12544</Task>
  <Opcode>0</Opcode>
  <Keywords>0x8010000000000000</Keywords>
  <TimeCreated SystemTime="2016-07-29T11:54:00.714207700Z" />
  <EventRecordID>67620</EventRecordID>
  <Correlation />
  <Execution ProcessID="552" ThreadID="4700" />
  <Channel>Security</Channel>
  <Computer>***</Computer>
  <Security />
</System>

instead of this...

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">- <System><Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" /><EventID>4625</EventID><Version>0</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode> <Keywords>0x8010000000000000</Keywords><TimeCreated SystemTime="2016-07-29T11:54:00.714207700Z" /><EventRecordID>67620</EventRecordID><Correlation /><Execution ProcessID="552" ThreadID="4700" />  <Channel>Security</Channel> <Computer>***</Computer><Security /> </System>
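The raw event text can't be re-wrapped after indexing, but for display purposes a sed-style rewrite at search time can insert a line break between adjacent tags. This is an untested sketch; whether \n is honored in the replacement is worth verifying on your version:

... | rex mode=sed field=_raw "s/></>\n</g"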
I need to first issue an alert for overheat temperature 24 hours in advance for the affected locations, when their forecast is above 100 F (long-term query). Then I need to query the window of 2 to 8 hours ahead (near-term forecast) of the more recent temperature forecast for the same set of locations. If the recent forecast for a location has dropped below the 100 F threshold, I need to issue an alert to cancel the previous alert. If a location's recent forecast is above 100 F but the prior forecast was below 100 F (no alert had been issued), I need to issue a new alert for that location. Effectively, the near-term query needs to access the results of the long-term query (or re-run it) to compare against the recent forecast results. (I'm especially unclear on how to compare two queries' results in Splunk.) How could I implement a solution with Splunk? Thanks for pointers!

Let's build an example to develop the solution. Assume the operation time in question is 8:00 AM on July 14, 2022, so the 24-hour-advance long-term forecast would have been made at 8:00 AM on July 13, 2022 (long-term forecast). The window for the short-term forecast would be 0:00 AM (8-8) to 6:00 AM (8-2) on the same day.

Here are more concise requirements:
1. Hourly, the 24-hours-ahead forecasts for all locations shall be collected and evaluated. If the temperature 24 hours out will be over the threshold (100 F), an alert shall be sent for the to-be-overheated locations.
2. Also hourly, the forecasts for the window from 2 hours to 8 hours ahead shall be collected and evaluated, with revisions made according to the following rules:
a. If a location's 2-to-8-hours-ahead forecast is below the threshold but an alert had been issued, a cancellation message shall be sent.
b. If a location's 2-to-8-hours-ahead forecast is above the threshold but no alert had been sent, a new alert shall be sent.
c. Otherwise, no action is needed.
3. At 15-minute intervals, the real-time temperature for the locations shall be collected and evaluated, with revisions made according to the following rules:
a. If a location's real-time temperature is below the threshold but an alert had been issued, a cancellation message shall be sent.
b. If a location's real-time temperature is above the threshold but no alert had been sent, a new alert shall be sent.
c. Otherwise, no action is needed.
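A hedged sketch of the comparison mechanics, assuming forecast events carry location, forecast_temp, and horizon_hours fields, and using a CSV lookup named alert_state.csv as the shared state between the scheduled searches (all names are assumptions). The hourly long-term search records which locations have open alerts:

index=weather_forecast horizon_hours=24
| stats latest(forecast_temp) as forecast_temp by location
| where forecast_temp > 100
| eval alerted=1, alerted_at=now()
| outputlookup alert_state.csv

The hourly near-term search then compares fresh forecasts against that state to decide on cancellations and new alerts:

index=weather_forecast horizon_hours>=2 horizon_hours<=8
| stats latest(forecast_temp) as near_temp by location
| lookup alert_state.csv location OUTPUT alerted
| eval alerted=coalesce(alerted, 0)
| eval action=case(near_temp<=100 AND alerted==1, "cancel",
    near_temp>100 AND alerted==0, "new_alert",
    true(), "none")
| where action!="none"

The 15-minute real-time check would follow the same pattern with the observed temperature in place of near_temp, updating alert_state.csv after each decision.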
I have dashboards that are configured with an input token, global_time, plus a time-input dropdown, so that when I change the time, all of my panels update automatically. Now, through drilldown settings, I am adding Link to Custom URL links to each panel that open a unique Splunk search page. The problem is that I want the link to follow the picker: if I change the window to 4 hours instead of 24 and click a bar on the chart, I want a Splunk search page covering 4 hours; if I change the global time to 7 days, I want a 7-day search. I figured I need to change the <&earliest=-4%40h&latest=now> part of the Link to Custom URL drilldown setting. How can I tie global_time into that timing?
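In Simple XML, the time input's token exposes earliest and latest sub-tokens that can be interpolated straight into the drilldown URL, so the link follows whatever the global picker is set to. A hedged sketch (the q= payload is a placeholder; global_time is the token name from the question):

<drilldown>
  <link target="_blank">/app/search/search?q=index%3Dmain&amp;earliest=$global_time.earliest$&amp;latest=$global_time.latest$</link>
</drilldown>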
I'm trying to find any new MFA factors (Duo) used by any user in the past X days, in order to create an alert. As an example: a user uses push notifications at every login for X-1 days, then on day X they use a passcode; I want that to trigger an alert or show up in a report. I'm having trouble wrapping my head around how to search for the first instance of a new value of the factor field in the past X days without specifying the expected value ahead of time (some users use push, some phone call, some passcode; I just want to know when they use something different). Any assistance or tips would be helpful.
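A common first-seen pattern, sketched under the assumption that the Duo events carry user and factor fields and that X is 7 days (the index, sourcetype, and field names are assumptions):

index=duo sourcetype=duo:authentication
| stats earliest(_time) as first_seen by user, factor
| where first_seen >= relative_time(now(), "-7d@d")
| convert ctime(first_seen)

The search window has to reach further back than the 7 days (e.g., 90 days), so that factors a user has always used don't look new.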
Hey everyone, I've got all our firewall logs going into a separate index. When I perform a search just using the index, for example index="sec-firewalls", the results vary quite a bit. I get nothing for real-time unless I select All time (real-time). Under relative ranges I get nothing for Today, nothing for Last 15 minutes, Last 4 hours, etc. Again, the only option that works is All time. When I'm looking at real-time results, events are about 2h30m behind. I am using the Splunk Add-on for Cisco ASA for this index. Can anyone help me figure out what's happening here? Thanks, Will
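A fixed offset like 2h30m usually points at a timezone mis-parse, where event timestamps land in the future (or far past) and so fall outside relative time ranges. A hedged diagnostic comparing the parsed timestamp with the index time:

index="sec-firewalls" earliest=-24h latest=+24h
| eval lag_minutes=round((_indextime - _time) / 60, 1)
| stats avg(lag_minutes) max(lag_minutes) by sourcetype, host

If lag_minutes sits near a constant offset, setting TZ for the sourcetype or host in props.conf on the parsing tier (or correcting the clock/timezone on the ASA itself) is the usual remedy.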
Most of my operations are based on saved searches, which run a few times weekly or monthly. The columns available should always align. I tried to get the base SPL down so I could produce a table showing one column with results from offset=0 (current iteration) and another column with results from offset=1 (the previous iteration), but I could not get this to work. I was expecting the below:

Available Columns    Value from Offset=0    Value from Offset=1
# of hosts           1000                   955

As an example, the current query would look like this:
| loadjob artifact_offset=0 savedsearch="named_search" ```current week```
| loadjob artifact_offset=1 savedsearch="named_search" ```previous iteration```

Once the table gets figured out, I'm not sure how I could even use the data for a single-value visualization, because it would need | timechart count to operate, but my "time" is the value from artifact_offset. So, two things:
1. Any help with the table to visualize differences between two jobs based on artifact_offset?
2. With that table, would it even be possible to use the outputs to populate the single-value visual?
Any other questions I need to answer?
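A hedged sketch for the table: loadjob is a generating command, so the second call has to run inside an append subsearch, and savedsearch generally takes the owner:app:name form (the owner, app, and the assumption that the saved search yields one row per host are all placeholders):

| loadjob artifact_offset=0 savedsearch="admin:search:named_search"
| eval run="Value from Offset=0"
| append
    [| loadjob artifact_offset=1 savedsearch="admin:search:named_search"
    | eval run="Value from Offset=1"]
| stats dc(host) as "# of hosts" by run
| transpose 0 header_field=run column_name="Available Columns"

For the single-value panel, timechart isn't required; a single-value visualization renders the first cell of the result, so appending something like | eval delta='Value from Offset=0' - 'Value from Offset=1' | fields delta would feed it directly.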
Hello, in my search I'm trying to get a series of events (transact, which is extracted from the _raw field) counted by another value in _raw, GET or POST. This is what I'm currently using:

host="EXAMPLE-*" sourcetype=Hex4 /ps/*
| rex mode=sed field=_raw "s/(\S+)(tx_\S+)(\/\S+)/\1trans\3/g"
| rex mode=sed field=_raw "s/(\S+)(nce_\S+)(\/\S+)/\1nce\3/g"
| rex mode=sed field=_raw "s/(\S+)(dce_\S+)(\/\S+)/\1dvc\3/g"
| rex "POST (?<transact>\S+)"
| stats count(eval(method="GET")) as GET, count(eval(method="POST")) as POST by transact

It does bring up the transactions and columns for GET and POST, but the counts are blank, so I know I'm doing something wrong. Any help would be greatly appreciated! Thank you!
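The stats clause tests a method field, but nothing in the pipeline extracts one (and the rex only captures transact from POST lines). A hedged fix that pulls both the verb and the path in one pass, keeping the sed normalizations as-is:

host="EXAMPLE-*" sourcetype=Hex4 /ps/*
| rex mode=sed field=_raw "s/(\S+)(tx_\S+)(\/\S+)/\1trans\3/g"
| rex mode=sed field=_raw "s/(\S+)(nce_\S+)(\/\S+)/\1nce\3/g"
| rex mode=sed field=_raw "s/(\S+)(dce_\S+)(\/\S+)/\1dvc\3/g"
| rex "(?<method>GET|POST)\s+(?<transact>\S+)"
| stats count(eval(method="GET")) as GET, count(eval(method="POST")) as POST by transact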
Hi, I am getting this error when clicking the Set up option for the ServiceNow SecOps add-on. It worked at the beginning, when it was installed and I was able to provide the integration details, but later this error popped up when trying to view the setup configuration. Any lead on this error would help me better understand the issue.

Error: "Unable to render setup. Most likely, the cause is that the setup.xml file for this app is not configured correctly. For example, it may not specify task and type attributes. Contact the application developer to resolve this issue. setup_stub"

Thanks in advance
I have a table like the one below:

Category    Time        Count of string
A           t-5mins     18
A           t-10mins    7
A           t-15mins    10
A           t-20mins    1
B           t-5mins     6
B           t-10mins    18

I would like to create a table with the latest (max) time and the sum of the count by category, so that I get this:

Category    Max Time    Sum
A           t-5mins     36
B           t-5mins     24

I can get the max time and the sum individually into a table, but I am having issues getting them both into one table; the time and sum values come up blank. Can someone advise, please?
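A hedged sketch, assuming the columns are fields named Category, Time, and count, and that the numeric part of the t-Nmins labels is the right recency key (the field names are taken from the table above):

| rex field=Time "t-(?<mins>\d+)"
| stats sum(count) as Sum, min(mins) as max_mins by Category
| eval "Max Time"="t-".max_mins."mins"
| table Category, "Max Time", Sum

min(mins) picks the smallest offset (t-5mins, i.e., the latest time). Blank values in this kind of result usually come from mismatched field names between the two aggregations, so computing both in the same stats call avoids that.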