All Topics


Hi, I have three search commands whose results I need to combine and show on a timechart. I need to count "Success", "Fail", "Total", and "Other". Here is my SPL:

index="myindex" TURNOFF NOT DISPOSE AND "A[*00]" AND "Packet Processed:" source="/data/app.log*"
| bin _time span=1h
| stats count AS Success by _time
| join _time
    [search index="myindex" TURNOFF NOT DISPOSE AND "Packet Processed:" source="/data/app.log*"
    | rex "R\[(?<R>\d+)"
    | search A!=*00
    | bin _time span=1h
    | stats count AS Failed by _time ]
| join _time
    [search index="myindex" TURNOFF NOT DISPOSE AND "Packet Processed:" AND "A[*]" source="/data/app.log*"
    | bin _time span=1h
    | stats count AS Totals by _time ]
| eval TotalSF=Success+Failed
| eval Other=Totals-TotalSF
| fields - Totals TotalSF
| addtotals

It works correctly on a short time range (like 2 or 3 hours), but when I set the time to Yesterday it produces a negative number for the field I call Other. Any ideas? Thanks!
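A join-free rewrite may avoid this: join subsearches are subject to result-count and runtime limits, so over longer ranges the subsearch counts can silently come up short, making Totals smaller than Success + Failed. A minimal sketch of a single-pass version, assuming the A[...] code can be pulled out with a rex like the one below (the extraction pattern is a guess based on the query above, so adjust it to the real event format):

index="myindex" TURNOFF NOT DISPOSE "Packet Processed:" source="/data/app.log*"
| rex "A\[(?<acode>[^\]]*)\]"
| timechart span=1h
    count(eval(like(acode, "%00"))) AS Success
    count(eval(isnotnull(acode) AND NOT like(acode, "%00"))) AS Failed
    count AS Total
| eval Other = Total - Success - Failed

Because every count comes from the same pass over the same events, Other can never go negative.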
Hi, I would like to have a search dropdown that is not hard-coded. Right now I have:

<label>Search by Customer</label>
<choice value="A">A</choice>
<choice value="B">B</choice>
<choice value="C">C</choice>
<choice value="All Customers">All</choice>
<default>All</default>

However, this only lets me hard-code all possible values; once a new customer comes in, it will not be captured in the search tab. Is there a way I can dynamically link the token to the dataset and update the choice values automatically? Thanks! Really appreciate it.
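For reference, Simple XML inputs can be populated from a search instead of static choices. A minimal sketch, assuming the customer names live in a field called customer in index=myindex (both names are placeholders):

<input type="dropdown" token="customer_tok" searchWhenChanged="true">
  <label>Search by Customer</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>customer</fieldForLabel>
  <fieldForValue>customer</fieldForValue>
  <search>
    <query>index=myindex | stats count by customer | fields customer</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
</input>

New customers appear automatically because the choice list is rebuilt from the populating search each time the dashboard loads.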
Does AppD provide the ability to monitor signal strength and packet loss, and to identify wifi connectivity failures at the end-user device? We want to monitor network issues on the point-of-sale tablets used by our employees.
Hey everyone, we are currently running into an issue with one of our sourcetypes coming in roughly five hours in the future. We have reviewed the appliance and verified that the server is set to the correct time, but we are unable to locate any issues on the appliance. We are attempting to modify inputs.conf and props.conf to manually adjust the time; we need the time to come in as Central time. Example of a timestamp in the log:

{ "Timestamp": "7/1/2022 4:15:28 PM", "MessageTime": "7/1/2022 4:15:31 PM" ..........

UF local/inputs.conf:

[monitor://D:\loglog\*.log]
disabled=false
followTail=0
index=logindex

UF local/props.conf:

[source::....log]
sourcetype=logsourcetype
TZ=America/Chicago
Time_PREFIX=Timestamp":

I also attempted to put the TZ into local/inputs.conf but reverted back to the original above:

[monitor://D:\loglog\*.log]
disabled=false
followTail=0
TZ=America/Chicago
index=logindex

I have also reviewed system/local to ensure there is nothing in there as well. Any recommendations? We have been troubleshooting this timing issue for a few months now, and I would like to finally resolve it on the Splunk side, as we appear to be unable to resolve it server-side.
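Two things stand out in the props.conf above: the documented spelling is TIME_PREFIX (Time_PREFIX is silently ignored), and timestamp/TZ settings must live where parsing happens, normally the indexer or a heavy forwarder rather than the UF (TZ in inputs.conf has no effect). A sketch of a parsing-tier props.conf keyed on the sourcetype, with the format string derived from the sample timestamp above:

[logsourcetype]
TZ = America/Chicago
TIME_PREFIX = "Timestamp":\s*"
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 25

A five-hour skew with a correct server clock is the classic symptom of Splunk reading a local-time timestamp as UTC, which an explicit TZ at the parsing tier should correct.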
Hi everyone: I have a lookup I am using to filter against another lookup, and I'm having trouble getting the output to look the way I want. I've looked at many answers here and just can't figure out what I'm doing wrong. My search:

| inputlookup lookup_1
| table cveID asset Project product dueDate
| mvcombine delim="," asset
| nomv asset
| eval numberCVEs=mvcount(split(asset,","))
| appendpipe [ | inputlookup lookup_2 | fields q1-cveID ]
| fillnull
| table cveID, q1-cveID asset Project product dueDate

I am trying to match the cveID field of lookup_1 to the q1-cveID field of lookup_2, and if there are CVEs that exist in lookup_2 but do not exist in lookup_1, I still want them to appear with a "0" value in the asset column (thus the append). What I'm currently getting:

cveID      q1-cveID   asset  Project    product  dueDate
cve-1234   0          7      Microsoft  Office   2022-07-01
0          cve-1234   0      0          0        0

What I would like to see:

cveID      q1-cveID   asset  Project    product  dueDate
cve-1234   cve-1234   7      Microsoft  Office   2022-07-01
cve-5678   cve-5678   0      Apple      iPhone   2022-08-01

How do I match up these two lookups and create a "0" value for items that exist only in lookup_2?
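One append-free pattern is to drive the search from lookup_2 and enrich each row from lookup_1 with the lookup command. A sketch, assuming lookup_1 exists as a lookup definition (if it is only a CSV file, reference the .csv filename instead) and the field names are exactly as shown above:

| inputlookup lookup_2
| lookup lookup_1 cveID AS "q1-cveID" OUTPUT asset Project product dueDate
| eval cveID='q1-cveID'
| fillnull value=0 asset
| table cveID q1-cveID asset Project product dueDate

appendpipe stacks the lookup_2 rows underneath instead of joining them sideways, which is why the current output splits each CVE across two rows; lookup performs the row-by-row match you are after, and fillnull then zeroes only the unmatched asset values.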
Hello, how would I know whether my Linux and Windows machines have a UF or an HF installed on them? I know one of them is installed on my machines for sure, but how would I know which one (UF or HF)? What is the indicator that a UF is there, and what is the indicator that an HF is there? I would appreciate your help, and thank you so much.
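A quick check, assuming default install paths (yours may differ): a UF installs to /opt/splunkforwarder on Linux or C:\Program Files\SplunkUniversalForwarder on Windows, while a heavy forwarder is a full Splunk Enterprise install (/opt/splunk or C:\Program Files\Splunk) that has forwarding configured in outputs.conf. The version command also tells them apart:

$SPLUNK_HOME/bin/splunk version
# A universal forwarder prints something like: Splunk Universal Forwarder 9.x.x (build ...)
# Splunk Enterprise (the basis of a heavy forwarder) prints: Splunk 9.x.x (build ...)

Note that a full Enterprise install only counts as an HF if it actually forwards data, so check $SPLUNK_HOME/etc/system/local/outputs.conf (or app-level outputs.conf) as well.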
Hello, we are onboarding JSON-formatted data that is written to a file on a server by a process called Periodic Stats Logger. The file has an initial header line, so Splunk does not recognize it as a JSON file. I want to exclude that line and load the data into a metric index. I created a metric index and also added a nullQueue transform to exclude the header line, but I still don't see any metric data coming into the metric index.

Configuration on indexer (standalone):

props.conf:
[log2metrics_json]
TRANSFORMS-t1=eliminate_header

transforms.conf:
[eliminate_header]
REGEX= file
DEST_KEY=queue
FORMAT=nullQueue

Configuration on forwarder:

inputs.conf:
[monitor:///opt/logs/dsstats.json]
sourcetype = log2metrics_json
index = pdmetrics
disabled = false

props.conf:
[log2metrics_json]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:log2metrics_default_json
NO_BINARY_CHECK = true
category = Log to Metrics
pulldown_type = 1
description = JSON-formatted data. Log-to-metrics processing converts the numeric values in json keys into metric data points.

Sample raw file (the subsequent events are identical in structure to the first, differing only in the Datetime and JVM memory values):

# Log file created at 25/Jun/2022:23:59:22 -0400 after rotating from file dsstats.json.20220626035922Z
{"Datetime":"2022-06-26T03:59:23.000723Z","Num Ops in Progress (Avg)":0,"Num Ops in Progress (Max)":0,"All Ops":0,"All Avg Time (ms)":0.0,"All Ops/sec":0.0,"Search Ops":0,"Search Avg Time (ms)":0.0,"Search Ops/sec":0.0,"Modify Ops":0,"Modify Avg Time (ms)":0.0,"Modify Ops/sec":0.0,"Add Ops":0,"Add Avg Time (ms)":0.0,"Add Ops/sec":0.0,"Delete Ops":0,"Delete Avg Time (ms)":0.0,"Delete Ops/sec":0.0,"ModifyDN Ops":0,"ModifyDN Avg Time (ms)":0.0,"ModifyDN Ops/sec":0.0,"Bind Ops":0,"Bind Avg Time (ms)":0.0,"Bind Ops/sec":0.0,"Compare Ops":0,"Compare Avg Time (ms)":0.0,"Compare Ops/sec":0.0,"Extended Ops":0,"Extended Avg Time (ms)":0.0,"Extended Ops/sec":0.0,"Num Current Conn":389.0,"Num New Conn":0,"Backend userRoot Entry Count":957545.0,"Backend userRoot DB Cache % Full":11.0,"Backend userRoot Size on Disk (GB)":1.1374394875019789,"Backend userRoot Active Cleaner Threads (Avg)":0,"Backend userRoot Cleaner Backlog (Avg)":0,"Backend userRoot Cleaner Backlog (Min)":0,"Backend userRoot Cleaner Backlog (Max)":0,"Backend userRoot Nodes Evicted":0,"Backend userRoot Checkpoints":0,"Backend userRoot Average Checkpoint Duration":0,"Backend userRoot New DB Logs":0,"Replica dc=mediacom,dc=com Replication Backlog (Avg)":0,"Replica dc=mediacom,dc=com Replication Backlog (Min)":0,"Replica dc=mediacom,dc=com Replication Backlog (Max)":0,"Replica dc=mediacom,dc=com Sent Updates":0,"Replica dc=mediacom,dc=com Received Updates":0,"Replica dc=mediacom,dc=com Failed Replayed Updates":0,"Replica dc=mediacom,dc=com Unresolved Naming Conflicts":0,"Replica dc=mediacom,dc=com Replication Backlog Oldest Pending Change":0,"Backend replicationChanges DB Cache % Full":2.0,"Backend replicationChanges Active Cleaner Threads (Avg)":0,"Backend replicationChanges Cleaner Backlog (Avg)":0,"Backend replicationChanges Cleaner Backlog (Min)":0,"Backend replicationChanges Cleaner Backlog (Max)":0,"Worker Thread % Busy (Avg)":0,"Worker Thread % Busy (Max)":0,"Queue Size (Avg)":0,"Queue Size (Max)":0,"JVM Memory Used GB":12.2666015625,"JVM Memory Percent Full":56.0,"Memory Consumer Total GB":1.2536366451531649,"Memory Consumer Percentage of Total Memory":5.0,"g1-young-generation minor-collection g1-evacuation-pause Count":0,"g1-young-generation minor-collection metadata-gc-threshold Count":0,"g1-young-generation minor-collection g1-evacuation-pause Duration (ms)":0,"g1-young-generation minor-collection metadata-gc-threshold Duration (ms)":0,"g1-eden-space Live Bytes":0,"g1-survivor-space Live Bytes":0,"g1-old-gen Live Bytes":0,"g1-young-generation minor-collection gclocker-initiated-gc Count":0,"g1-young-generation minor-collection gclocker-initiated-gc Duration (ms)":0,"All Ops 0-1 ms":0,"All Ops 1-2 ms":0,"All Ops 2-3 ms":0,"All Ops 3-5 ms":0,"All Ops 5-10 ms":0,"All Ops 10-20 ms":0,"All Ops 20-30 ms":0,"All Ops 30-50 ms":0,"All Ops 50-100 ms":0,"All Ops 100-1000 ms":0,"All Ops > 1000 ms":0}

Any help is appreciated. Thanks in advance!
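Two observations on the config above, offered as a starting point rather than a confirmed fix. First, REGEX= file happens to match the header (the word "file" appears in it) but is fragile; anchoring on the leading comment marker is tighter, assuming the header always starts with # as in the sample:

[eliminate_header]
REGEX = ^#
DEST_KEY = queue
FORMAT = nullQueue

Second, and more likely the real blocker: with INDEXED_EXTRACTIONS = json in the forwarder's props.conf, the UF parses this sourcetype itself, and data the forwarder has already parsed is not run through the indexer's parsing pipeline, so indexer-side TRANSFORMS may never fire. The props/transforms pair for dropping the header may therefore need to live on the forwarder alongside the INDEXED_EXTRACTIONS stanza.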
Hello Splunkers, is a Splunk forwarder required to send data to Splunk from a switch or router? Or can I configure the device to send logs directly to Splunk, for example over port 514, as in a Cisco config with "logging host", etc.? Thanks, EWH
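For reference, no forwarder is needed on the device itself: Splunk can listen for syslog directly. A minimal inputs.conf sketch (note that binding UDP 514 requires Splunk to run with sufficient privileges, and a dedicated syslog server writing to files monitored by a forwarder is the usual production pattern because Splunk restarts drop in-flight UDP):

[udp://514]
sourcetype = syslog
connection_host = ip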
Does Splunk have an add-on for Cisco VTC devices? I am getting logs, but the information is not human-readable.
Hi all, as part of complete monitoring for our customers, we want to detect when a subsearch is finalized. If this happens, the search results of the corresponding main search could be incomplete. This would be especially helpful for scheduled searches, where no user gets any feedback. Is there any log channel where this is logged? I tried starting Splunk in debug mode but found nothing. The only place seems to be the search logs, but indexing every search log seems a bit oversized. Anyone out there who could provide a hint? Thanks!
Good morning, community. I want to generate an alert in Splunk based on some graphs that are generated from a .TXT file. I only need the last two values written to that file, to check whether the latest value has dropped 10% from the previous measurement. When I query the TXT file, it displays a list like this in the events:

2022-7-1 11:00:0 OVERALL: 10000
2022-7-1 12:00:0 OVERALL: 11000

I just need to get the last and the penultimate numeric values registered in the list, put them into variables, and compare the two to see if there is a difference of more than 10%. If you have had a similar case, please share your solution.
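A sketch of one approach, assuming the events look like the two lines above and the index/source names are placeholders:

index=your_index source="*your_file.txt"
| rex "OVERALL:\s+(?<overall>\d+)"
| sort - _time
| head 2
| stats latest(overall) AS latest earliest(overall) AS previous
| eval pct_change = round((latest - previous) / previous * 100, 2)
| where pct_change < -10

Saved as an alert with the trigger condition "number of results > 0", this fires only when the most recent value is more than 10% below the one before it.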
Hi, in Splunk, is a custom visualization just any visualization modified by the user, or is it a visualization that specifically makes use of the Splunk Custom Visualization API? I am confused.
Hi, in Splunk, are saved searches and reports the same thing? Thanks, Patrick
Hello, I need help. I am looking for an SPL query or app that visualizes log sources that have not sent logs to the SIEM within 24 hours. Can anyone assist? Thank you in advance!
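A common starting point is tstats over the index metadata; a minimal sketch (adjust the split-by fields to whatever defines a "log source" in your environment, e.g. host instead of sourcetype):

| tstats latest(_time) AS last_seen WHERE index=* BY index, sourcetype
| eval hours_since = round((now() - last_seen) / 3600, 1)
| where hours_since > 24
| sort - hours_since

The result table feeds a dashboard panel directly; note this only shows sources that have indexed at least once, so sources that have never reported need a lookup of expected sources to compare against.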
I have an index, an_index, with a field containing URLs of the form URL/folder/folder. I only want to list the records that contain a specific URL. I don't care about anything after the URL; I just want to match the URL itself.
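Two common ways to do a prefix match, sketched with placeholder names (the field url and the value https://example.com are assumptions, substitute your own):

index=an_index url="https://example.com/*"

or, for regex-level control over what counts as "the URL":

index=an_index
| where match(url, "^https://example\.com/")

The wildcard form lets the indexer filter early and is usually faster; the match() form is stricter about where the prefix must sit.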
Hi folks, I have a question. Consider a simple architecture with 1 search head and 1 indexer. I need to move from the single indexer to a clustered indexer. I know I need to add another twin indexer server and a cluster manager, but my question is: when I transform the single indexer into a clustered indexer, will I lose all my old data, or will the old data replicate to the new indexer? How do I manage indexes.conf in that case?
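For reference, a sketch of the conversion command (syntax for recent versions; older releases use -mode slave and -master_uri, and the placeholders are assumptions):

splunk edit cluster-config -mode peer -manager_uri https://<cluster-manager>:8089 -replication_port 9887 -secret <your_key>
splunk restart

Per Splunk's migration docs, buckets created before the indexer joined the cluster are kept and remain searchable, but they are not retroactively replicated; only data indexed after joining gets the configured replication factor. Index definitions then move to the cluster manager, which distributes a shared indexes.conf to all peers via the configuration bundle, rather than each peer keeping its own copy.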
Hi, I have 5 multiselect input fields in a dashboard, as below: Environment, field2, field3, field4, field5. After selecting a value in the Environment field, the dashboard immediately starts searching. But our requirement is that the dashboard should not start searching until Environment has values and at least one of the remaining 4 fields (field2 OR field3 OR field4 OR field5) has values. Kindly help me with how to pass tokens to achieve this result. Thanks in advance!
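One pattern is to set searchWhenChanged="false" on the inputs and gate the panels behind a guard token that is only set when the condition holds. A rough sketch (token names are made up, each input needs an empty <default> so the eval always has something to substitute, and <change> eval handlers can be fiddly, so treat this as a starting point rather than a drop-in):

<input type="multiselect" token="field2_tok" searchWhenChanged="false">
  <label>field2</label>
  <default></default>
  <change>
    <eval token="ready">if("$env_tok$"!="" AND ("$field2_tok$"!="" OR "$field3_tok$"!="" OR "$field4_tok$"!="" OR "$field5_tok$"!=""), "go", null())</eval>
  </change>
</input>
<!-- give the Environment input and the other three inputs the same <change> handler -->
<panel depends="$ready$">
  <!-- panel searches only run once $ready$ is set -->
</panel>

Returning null() unsets the token again, so the panels hide and their searches stop dispatching whenever the condition is no longer met.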
Can someone help me pull out these data points: cw.pptx;text.html;text.txt? I need to split at the ; mark but have the values in their own fields, so I can see only the .txt in my search.
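A sketch, assuming the semicolon-separated values live in a field called attachments (a made-up name):

... | makemv delim=";" attachments
| eval txt_only = mvfilter(match(attachments, "\.txt$"))

makemv turns the single string into a multivalue field with one value per file name, and mvfilter keeps only the values ending in .txt; add | mvexpand attachments before the eval if each file should instead become its own result row.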
Hello, I'm trying to find an efficient way to turn the results from a dbxlookup, where a single userID can bring back more than one record, into a multiple-row output. Example: I have a list of 10 userIDs and run a dbxlookup against a database containing login/logout times. I want to see how many times each userID logged in/out, as well as their first login/logout of the month and their most recent login/logout. I will supply my SPL shortly, but I wanted to see if anyone has run into this in the past and has a solution. Thanks and God bless, Genesius
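While waiting on the actual SPL, a sketch of the aggregation half, assuming the dbxlookup returns one row per login/logout event with fields userID, login_time, and logout_time as epoch values (all names and formats are assumptions):

... your dbxlookup here ...
| stats count AS login_count
        min(login_time) AS first_login max(login_time) AS last_login
        min(logout_time) AS first_logout max(logout_time) AS last_logout
        BY userID
| fieldformat first_login = strftime(first_login, "%Y-%m-%d %H:%M:%S")
| fieldformat last_login = strftime(last_login, "%Y-%m-%d %H:%M:%S")

stats collapses the multiple rows per userID into one summary row each; if the DB returns string timestamps rather than epochs, convert them first with strptime() so min()/max() compare times rather than text.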
Hello. I've been looking around a bit, but it appears my google-fu isn't up to snuff for this problem. I'm wondering how one can parse non-pure XML logs. We have a ".txt" file with timestamps, debug messages, and all that jazz, but it also writes entire XML responses into the log file. I don't know if the source log is able to separate them out (I doubt it), but the question goes as follows. I'm aware there is an XML parser for props.conf, but does that work on intermittent XML-formatted log entries?

<date>,source,debug: event happened
<date>,source,debug: function executed
<date>,source,debug: send data to URI
<date>,source,debug: multiline XML response
<date>,source,log message

The XML response can sometimes be longer than 1000 lines, which is where the issue arises, as we get an error due to the log entry being more than 1000 lines long. It's all the XML tags, <meta> </meta>, <link> </link>, and the like. Does props support something along the lines of "if XML, do XML parsing, else do normal props things", or do I have to parse all the XML response segments manually?
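On the line-count error specifically: that cap typically comes from MAX_EVENTS, a line-count limit that applies when SHOULD_LINEMERGE = true. Breaking events with LINE_BREAKER instead sidesteps it (though TRUNCATE, a byte cap, still applies). A sketch, assuming events start with a timestamp and the date regex below is a guess at your <date> format:

[my_mixed_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
TRUNCATE = 1000000

As for conditional parsing: props.conf has no per-event "if XML" branch. KV_MODE = xml is a search-time setting applied to whole events, so the usual approach for mixed files is to break events so each XML response lands inside a single event, then extract from it at search time (e.g. with spath or rex) only on the events that contain XML.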