All Topics

I am a noob to Enterprise Security. We recently had a PA event, and the matter of FIFO exceptions for PA devices came up. Someone observed that it would be pretty cool if we could alert on that, and then someone else said, "Sadly, PA firewalls don't let us see that data." I am neither a network engineer nor an Enterprise Security expert, but I did poke around online and found a Palo Alto Networks knowledge base article (https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000PLJ7CAO) mentioning the 'rcv_fifo_overrun' metric. Can anyone direct me to a query or data model that contains that field?
1. We tried creating a service template and linked a service to it, but there is no option to unlink. If we opt for delete, it is going to delete the service. FYI: I know there is an option to modify the service, which breaks the link between that service and the service template. It would be easier if we could have an unlink option.
How do I add a click event on a map to pass data to a second dashboard? This data is not available in the geostats command itself. Thanks.
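In Simple XML, a map panel can carry a drilldown element whose link passes the clicked value to another dashboard as a form token. A minimal sketch, assuming a target dashboard named second_dashboard that exposes a form.location input (both names are hypothetical):

```xml
<map>
  <search>
    <query>index=web | iplocation clientip | geostats count</query>
  </search>
  <drilldown>
    <!-- $click.value$ holds the value of the clicked element; the dashboard
         name and token name here are assumptions for illustration -->
    <link target="_blank">/app/search/second_dashboard?form.location=$click.value$</link>
  </drilldown>
</map>
```

Any data the second dashboard needs beyond the clicked value would have to be re-derived there from the passed token, since geostats aggregates away the raw events.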
Hello! So I have a panel in my dashboard that pulls in data from a lookup table to display the contents of three separate products in a line chart. This line chart just counts and sums up the totals for each product for each day. I want to be able to click on any of the three products for any given day and have that populate the individual event records for that product on that day. I've been trying to work with the documentation and the example dashboards on Splunk but can't seem to get it to work. I've included an example table from my lookup below. Any feedback would be great. Thanks!

date        count  usd_total  product_name
1612180800  24     240        product_1
1612267200  18     360        product_2
1612353600  30     900        product_3
1612440000  11     110        product_1
1612526400  11     220        product_2
1612612800  37     1110       product_3
1612699200  43     430        product_1
1612785600  27     540        product_2
1612872000  15     450        product_3
1612958400  47     470        product_1
1613044800  12     240        product_2
1613131200  46     1380       product_3
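For chart drilldowns, Simple XML predefines $click.value$ (the clicked x-axis value) and $click.name2$ (the clicked series name). A sketch of the wiring, assuming the lookup file is named products.csv (a hypothetical name; the column names follow the table above):

```xml
<chart>
  <search>
    <query>| inputlookup products.csv | eval _time=date | timechart span=1d sum(usd_total) by product_name</query>
  </search>
  <drilldown>
    <!-- capture the clicked day and series into tokens for the detail panel -->
    <set token="sel_time">$click.value$</set>
    <set token="sel_product">$click.name2$</set>
  </drilldown>
</chart>
<table>
  <search>
    <query>| inputlookup products.csv | where date=$sel_time$ AND product_name="$sel_product$"</query>
  </search>
</table>
```

The detail table only renders once both tokens are set by a click; if the raw event records live in an index rather than the lookup, the second query would search that index with the same two token filters.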
index=graphsecurityalert contains information about all attacks in the "title" field. index=zscaler contains information about all IPs and locations, but it does not have logs about attacks. Now I want a query to show the IP and the title of the attack on a geo map. I tried index=graphsecurityalert OR index=zscaler title=* | iplocation src_ip | geostats count by userStates{}.logonLocation but I am unable to get results. Please help me with a query to show IP attacks on a geo map. @soutamo @saravanan90 @thambisetty @ITWhisperer @gcusello @bowesmana @to4kawa
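One possible direction, assuming both indexes share a src_ip field that can correlate an attack title with a location (an assumption; the field names would need checking against the actual events in each index):

```
index=graphsecurityalert OR index=zscaler
| stats values(title) as title by src_ip
| iplocation src_ip
| geostats latfield=lat longfield=lon count by title
```

The stats step pulls the attack title from the graphsecurityalert events onto each IP, iplocation derives lat/lon, and geostats then buckets the counts geographically split by title. If there is no shared IP field between the two indexes, no map query can join them and the correlation would have to happen upstream.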
It should look like the two hyperlinks below: "search by employeeid" (hyperlink) and "search by app" (hyperlink). Once clicked, each hyperlink should open a new search with the corresponding query: index=x | search employeeid=123 and index=x | search app=abc. @scelikok @woodcock Please help with this. Thanks in advance.
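A minimal Simple XML sketch using an html panel with links into the Search app; the q= parameter carries URL-encoded SPL, and the two queries are the ones from the question:

```xml
<row>
  <panel>
    <html>
      <!-- each anchor opens the Search app in a new tab with a prefilled query -->
      <a href="/app/search/search?q=search%20index%3Dx%20employeeid%3D123" target="_blank">search by employeeid</a><br/>
      <a href="/app/search/search?q=search%20index%3Dx%20app%3Dabc" target="_blank">search by app</a>
    </html>
  </panel>
</row>
```

If the employeeid or app value should come from a dashboard input, the literal values in the href can be replaced with $token$ references.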
I am currently ingesting tickets from Zendesk. I pull in data from the previous day, with one script for each:

Tickets: any created in the time range
Audits: any updated in the time range

Tickets are simple, with no nested fields. They give information like Created Date, Subject, Status, and Requester. Audits are more complicated and contain the remaining information. This could be anything from changing the status to adding a comment. Here's an example:

{
  "id": 1234567,
  "ticket_id": 1111,
  "created_at": "2021-02-15T08:03:15Z",
  "author_id": -1,
  "metadata": {
    "system": {},
    "custom": {}
  },
  "events": [
    {
      "id": 5555555,
      "type": "Change",
      "value": "closed",
      "field_name": "status",
      "previous_value": "solved"
    }
  ],
  "via": {
    "channel": "rule",
    "source": {
      "to": {},
      "from": {
        "deleted": false,
        "title": "Notify all users on status change",
        "id": 3333333
      },
      "rel": "automation"
    }
  }
}

What's the best way to tie this into the data model? I certainly could modify my script to transform the data before ingesting, but I'd prefer Splunk to do the heavy lifting. I'd like to be able to merge in things like comments and time tracking (that data would be under the events array with a unique field_name). For example, only items in the events array that contain field_name "comments" should become "All_Ticket_Management.comments" for the ticket that matches its ID.
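As a search-time starting point, the nested events array can be unpacked with spath and mvexpand before mapping anything into the data model. A sketch; the index and sourcetype names are assumptions:

```
index=zendesk sourcetype=zendesk:audits
| spath path=events{} output=event
| mvexpand event
| spath input=event
| search field_name="comments"
| table ticket_id created_at field_name value
```

spath path=events{} pulls each array element into a multivalue field, mvexpand makes one result row per element, and the second spath extracts that element's own fields (field_name, value) while the audit's ticket_id remains on the row. A calculated field or eval-based extraction built on this pattern could then back an All_Ticket_Management.comments mapping without changing the ingestion script.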
I am searching for the best way to create a timechart from queries that have to evaluate data over a period of time. The item I am counting is vulnerability data, and that data is built from scan outputs that occur at different times across different assets throughout the week. So for instance, if I ran this query over the past 7 days for today:

index="qualys" sourcetype="qualys:hostdetection" TYPE="CONFIRMED" OS=[OS] PATCHABLE=YES
| dedup HOST_ID QID sortby -_time
| search NOT STATUS=FIXED
| stats count by severity

I would get back information on all open vulnerabilities by severity (critical, high, medium, low). I now need to show that trend over a 14-day period in a timechart, with the issue being that any one day has to be a 7-day lookback to get an accurate total. I thought of using a macro and then doing an append, but that seems expensive. I also considered running the query over and over with earliest=-7d@d latest=[-appropriate day count]. I am sure there is a more elegant way, though. Any advice is greatly appreciated.
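One direction worth testing is streamstats with a sliding time window, which computes a rolling 7-day figure at each point instead of re-running the search once per day. This is a sketch only: the interaction between dedup and the rolling window would need validating against real data, and the field names follow the question.

```
index="qualys" sourcetype="qualys:hostdetection" TYPE="CONFIRMED" OS=[OS] PATCHABLE=YES earliest=-21d@d
| dedup HOST_ID QID sortby -_time
| search NOT STATUS=FIXED
| sort 0 _time
| streamstats time_window=7d dc(QID) as open_vulns by severity
| timechart span=1d max(open_vulns) by severity
```

streamstats time_window requires events in ascending time order, hence the sort 0 _time. Searching back 21 days gives the 14 charted days plus the 7-day warm-up so the first plotted day already has a full lookback.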
I am running a query to generate a chart. The query time range is the previous 7 days; with this range I get the error message I attach, but when I lower the range to 4 or 5 days I do get the information. I suspect it is because of how long the search takes. Is there some configuration that limits the maximum time in seconds before Splunk cancels the search? Someone suggested that I review the limits.conf file, but when I review the documentation, I don't see which stanza I should modify. I'd appreciate it if someone can guide me. https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Limitsconf#.5Bsearch.5D
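If the search is being cut off by a per-role runtime quota, the relevant setting may live in authorize.conf rather than limits.conf. A hedged example (the stanza name and values are illustrative; the role must match the one actually running the search):

```
# authorize.conf -- per-role search limits (example values only)
[role_power]
# maximum wall-clock time a search launched by this role may run
srchMaxTime = 1h
# disk quota (MB) for this role's search artifacts
srchDiskQuota = 1000
```

Checking the exact text of the error message against the docs is the surest way to identify which limit fired, since quota, runtime, and memory ceilings each produce different messages.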
Hi all, I am new to Splunk deployment and I am researching the possibility of integrating Tenable products with Splunk. I know that it has to be a Tenable.io or Tenable.sc product that works with the Splunk Add-on. However, does it work for Splunk in general, or only with Splunk Enterprise? Thanks in advance.
I'm looking to do some alerting or analysis to help troubleshoot lag time in logging. I'd like to compare the _indextime and _time fields to see how long it's taking the actual events to get indexed by Splunk. We have some users of one specific index who are stating that they see at least a couple of hours of lag between the event being generated and Splunk indexing the event. This is initial research to help determine whether it is a network issue, a Splunk issue, or something else. Thanks for any help!
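The comparison itself is a one-line eval; a sketch (the index name is a placeholder):

```
index=your_index earliest=-24h
| eval lag_seconds=_indextime-_time
| stats min(lag_seconds) avg(lag_seconds) max(lag_seconds) perc95(lag_seconds) by host, sourcetype
```

Splitting by host and sourcetype shows whether the lag is confined to one forwarder or one data source. Swapping the stats line for a timechart of max(lag_seconds) shows whether the delay is constant or bursty, which helps separate a network bottleneck from a timestamping problem.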
I'm looking for a way to compare the data from the package.sh script for multiple servers. I'm running the script every 12 hours. I'm currently doing the following, but it doesn't break out the data by server:

sourcetype=package host=hostname OR host=hostname earliest=-12h@d latest=now
| eval output = toString(VERSION) + " - " + toString(RELEASE)
| makemv delim=";" output
| mvexpand output
| eval Day=if(_time<relative_time(now(), "@d"),"Previous","Current")
| rename NAME as Package
| chart values(output) over Package by Day
| where Previous!=Current

Thanks
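Since chart allows only one split field after over, one way to keep the per-server breakdown is to fold host and Day into a single series field. A sketch based on the search above (host names are placeholders):

```
sourcetype=package (host=host1 OR host=host2) earliest=-12h@d latest=now
| eval output = toString(VERSION) + " - " + toString(RELEASE)
| makemv delim=";" output
| mvexpand output
| eval Day=if(_time<relative_time(now(), "@d"),"Previous","Current")
| rename NAME as Package
| eval series=host.":".Day
| chart values(output) over Package by series
```

That yields one column per host/day pair (for example host1:Previous and host1:Current), so a changed package shows as differing values within a host's pair of columns; the final Previous!=Current comparison would then need to be done per host, for example with foreach over each pair.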
I need to implement Splunk, but the client does not want the Windows and Linux sources to send their logs directly to the indexer. They want an intermediate server to collect the logs from all sources: syslog, Windows, Linux, and databases. I once saw a video that mentioned an intermediate forwarder, which would be a Heavy Forwarder. Is that possible? For the Windows agent, do I just put the IP of the HF in the installation wizard? For the Linux agent, do I configure outputs.conf to point to the IP of the HF? What other considerations should I keep in mind?
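Yes, this is the standard intermediate (heavy) forwarder pattern: universal forwarders send to the HF, and the HF forwards to the indexer. Minimal outputs.conf sketches; the IPs are placeholders and 9997 is the conventional receiving port:

```
# outputs.conf on each universal forwarder (Windows or Linux)
[tcpout]
defaultGroup = intermediate_hf

[tcpout:intermediate_hf]
server = 10.0.0.10:9997

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = 10.0.0.20:9997
```

On Windows, entering the HF's IP and port in the installer wizard's "receiving indexer" step produces the equivalent outputs.conf. The HF additionally needs an inputs.conf stanza such as [splunktcp://9997] to receive forwarder traffic, plus its own syslog and database inputs for the non-agent sources.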
I'm trying to make a table of bookings for the whole of 2019. My search is working as expected except for one column. I've searched extensively and tried the convert, rename, and eval functions, but none of them are working for me (at least the way I'm using them). This is my search and the result of my table:

index=myIndex host=myHost confirmationNumber step_code="'BOOKING_DONE'" earliest=01/01/2019:00:00:00 latest=12/31/2019:00:00:00
| spath
| timechart span=1mon count by Resort limit=0
| addtotals
| addcoltotals
| eval Month=strptime(_time,"%M")
| table _time, 'BBO', 'BNG', 'BRP', 'BTC', 'INN', 'NGA', 'SAT', 'SBD', 'SBR', 'SEB', 'SGL', 'SGO', 'SHC', 'SLS', 'SLU', 'SMB', 'SNG', 'SRB', 'SRC', 'SWH', Total
| rename _time AS Month

PS: I am also trying to add a label to the last (currently empty) row and change its name to "Total per Resort".
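Two small changes may cover both points: strftime (not strptime) formats an epoch into text, and addcoltotals accepts labelfield/label for naming the totals row. A sketch of the tail of the search:

```
| timechart span=1mon count by Resort limit=0
| addtotals
| eval Month=strftime(_time, "%B %Y")
| fields - _time
| addcoltotals labelfield=Month label="Total per Resort"
| table Month *
```

strptime parses a text timestamp into an epoch, which is why applying it to _time returned nothing useful; strftime goes the other way. Moving the eval before addcoltotals lets labelfield fill the Month cell of the appended totals row.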
I want to know whether batch logging helps in reducing Splunk Cloud cost.
Hello, I have a Heavy Forwarder on which I receive logs via the Splunk Add-on for AWS as they appear in my S3 bucket. I know I will be receiving log files ending with `*_connectionlog_*.gz`, `*_userlog_*.gz` and `*_useractivitylog_*.gz`. My current input definition looks like this:

[aws_sqs_based_s3://MyApp_RedshiftAuditLogs]
aws_account = redacted
index = myapp_redshiftindex
interval = 300
s3_file_decoder = CustomLogs
sourcetype = aws:cloudtrail
sqs_batch_size = 10
sqs_queue_region = redacted
sqs_queue_url = https://redacted.url
disabled = 0

The problem is that the `sourcetype` field has a single value here. I would like to assign this value based on whether there is useractivitylog, connectionlog or userlog in the filename that just came in on the heavy forwarder. The vision is that I will further research how to properly parse and extract each of these types of logs, so that when I eventually search the index, I will have extracted some (if not all) of the fields, but I am not at that stage yet. Questions: 1. Am I approaching this correctly by wanting to assign different sourcetypes to files that are structured differently? 2. How do I do this assigning? 3. Will the path you propose enable me to write some parsing/extraction logic later down the road? Thank you for your time!
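One approach, assuming parsing happens on this heavy forwarder: an index-time transform that rewrites the sourcetype based on the source path. The target sourcetype names below are placeholders; only the three filename substrings come from the question.

```
# props.conf on the heavy forwarder
[aws:cloudtrail]
TRANSFORMS-redshift_st = set_st_connectionlog, set_st_userlog, set_st_useractivitylog

# transforms.conf
[set_st_connectionlog]
SOURCE_KEY = MetaData:Source
REGEX = _connectionlog_.*\.gz$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:redshift:connectionlog

[set_st_userlog]
SOURCE_KEY = MetaData:Source
REGEX = _userlog_.*\.gz$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:redshift:userlog

[set_st_useractivitylog]
SOURCE_KEY = MetaData:Source
REGEX = _useractivitylog_.*\.gz$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:redshift:useractivitylog
```

This also speaks to question 3: once each file type has its own sourcetype, per-sourcetype props stanzas (field extractions, timestamping) can be layered on later. It is worth verifying first that the SQS-based S3 input preserves the original filename in the source metadata, since the transforms match against it.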
I'm trying to get the results of a script which outputs a largish table into Splunk, but something isn't right in the way the results are being split into different events. I want the complete table (about 100 lines) to be contained in one event so I can do magic with a multikv command. At the moment, each run is split across events: some are 60+ lines, some are single lines, and some are in between.

The actual script is being run on a search head, which has all its outputs forwarded to the indexer. The script starts its output with the literal characters BOF and ends with EOF; this works fine when run directly. Config files below:

inputs.conf:
[script://$SPLUNK_HOME/etc/apps/stem-snmp/bin/stem-snmptable.sh]
disabled=false
index=main
interval=60
sourcetype=stem-snmptable

props.conf:
[stem-snmptable]
DATETIME_CONFIG = CURRENT
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = "(EOF)"
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
pulldown_type = 1
disabled = false

On the indexer I have the following in a custom app's local folder (is this right?):

[stem-snmptable]
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
pulldown_type = 1
disabled = false
MUST_BREAK_AFTER = "(EOF)"
MUST_NOT_BREAK_AFTER = "(BOF)"
DATETIME_CONFIG = CURRENT

So, where have I gone wrong? Do I need to put the indexer props.conf in a different location? Have I misunderstood the break and linemerge configs? Any help much appreciated.
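Two likely issues stand out: regex-valued settings such as EVENT_BREAKER and MUST_BREAK_AFTER should not be wrapped in quotes, and a search head is a full Splunk instance, so event breaking most likely happens there rather than on the indexer. A simpler stanza to try on the search head (a sketch to validate, not a confirmed fix):

```
# props.conf on the search head, where the scripted input is parsed
[stem-snmptable]
SHOULD_LINEMERGE = false
# break events on the newline(s) after the literal EOF;
# the capture group is the discarded breaker text
LINE_BREAKER = EOF(\n+)
# raise the per-event character limit above the default so a
# 100-line table is not truncated
TRUNCATE = 100000
DATETIME_CONFIG = CURRENT
```

With SHOULD_LINEMERGE disabled, LINE_BREAKER alone controls event boundaries, which is usually more predictable than the MUST_BREAK_AFTER/MUST_NOT_BREAK_AFTER line-merging rules.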
I am trying to upload documents from a user whose log files have multiple dots in the naming convention:

The logs have a *.0.1.log file extension; would it be possible to use *.*.*.log instead?
The logs have a *-20210218.000023-5644-5716.0.log file extension; would it be possible to use *.*.*.*.log instead?
The logs have a *-20201218.105324-11260-11240.0.log file extension; would it be possible to use *.*.*.log instead?
The logs have a *-20210209.145105-16220-11864.0.log file extension; would it be possible to use *.*.*.log instead?
The logs have a *-20201218.105324-25876-14744.0.log file extension; would it be possible to use *.*.*.log instead?

Is there a way to get all of these .log variants into Splunk using one monitoring stanza?
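One possible single stanza, assuming the files live under one directory (the path, index, and sourcetype are placeholders): a monitor input whose whitelist regex matches any filename ending in a digit segment followed by .log, which covers every pattern listed above.

```
# inputs.conf -- one stanza for all variants
[monitor:///var/log/userlogs]
# whitelist is a regex applied to the full path; \.\d+\.log$ matches
# *.0.log, *.0.1.log, *-20210218.000023-5644-5716.0.log, etc.
whitelist = \.\d+\.log$
index = main
sourcetype = user:applogs
```

Unlike the wildcard in a monitor path, whitelist takes a regular expression, so the multiple dots need no special handling beyond escaping the final literal dots.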
I would like to use the following command in order to compare process_exec with comparisonterm:

| lookup ut_levenshtein_lookup word1 as process_exec, word2 as comparisonterm

However, I would like comparisonterm to contain a list of processes that will be inside a lookup table:

comparisonterm
process1
process2
process3
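One way to compare every process_exec against every term is a cross join: add a constant key to both sides and join with max=0 so all term rows match. A sketch; the base search, the comparison_terms.csv lookup file, and the distance threshold are assumptions, and ut_levenshtein_lookup comes from the URL Toolbox app as in the question.

```
index=endpoint sourcetype=sysmon
| stats count by process_exec
| eval key=1
| join key max=0
    [| inputlookup comparison_terms.csv
     | eval key=1]
| lookup ut_levenshtein_lookup word1 as process_exec, word2 as comparisonterm
| where ut_levenshtein <= 2
| fields - key
```

Each process then appears once per term with the distance attached, and the where clause keeps only near matches. With long term lists, the row count multiplies, so aggregating with stats before the join keeps the cross product manageable.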
Hi,

I'm running Splunk Enterprise v7.0.1 (indexer) on a separate Linux server, with Splunk forwarders on two more Linux servers that are forwarding data to the indexer.

I would like to monitor:
1) CPU usage
2) RAM usage
3) Hard disk utilization
4) Load average
5) Largest files
6) LAN card traffic

The Monitoring Console on the indexer fails to show these metrics for all instances other than its local one. Is there any way to monitor these metrics for the forwarders as well as the localhost?

I appreciate whoever is willing to help. Thanks and regards.