All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, What is the difference between "bin span=5m" and "timechart span=5m"? I mean, is it better to use bin span and then stats, rather than using timechart? Which one is more efficient? What is the difference at all? Thanks,
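A rough sketch of the two forms for comparison (the split-by field host is just a placeholder): timechart is essentially bin on _time plus a stats split by one field, with empty time buckets filled in and the output reshaped for charting, so these two searches bucket events the same way:

    ... | bin _time span=5m | stats count by _time, host

    ... | timechart span=5m count by host

The bin + stats form gives more flexibility (multiple split-by fields, no automatic bucket filling), while timechart produces chart-ready output directly.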
Hi All, I need to know about EUM monitoring. Can we set up End User Monitoring without a reverse proxy server in an on-premises setup, or not? If it is not possible, is it necessary that the reverse proxy server be in the DMZ?
Hello, I am setting up NetFlow collection on Splunk. I am a very occasional user, and I come to you for help. What interests me are specific dialogs of my network infrastructure: src=net_A to dest=net_B, or src=net_B to dest=net_A. All the rest I don't want Splunk to keep or store, for example net_B to net_B, net_B to net_C, ..... I think I must use CIDRMATCH for my need, to do the filtering. I think it must be done on the forwarder, but I am not sure. Is there any possibility of doing this? My Splunk infrastructure: Splunk 8.1.1, 2 forwarders, 2 indexers, 2 search heads, 1 deployment/license server. Thank you
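A possible index-time filtering sketch, assuming the events literally contain src=... and dest=... and that net_A and net_B correspond to, say, 10.1.0.0/16 and 10.2.0.0/16 (the sourcetype name and subnets are placeholders). Index-time transforms only run where the data is parsed, so this would go on the indexers or on a heavy forwarder, not on a universal forwarder:

    # props.conf (sourcetype name is an assumption)
    [netflow]
    TRANSFORMS-filter = netflow_drop_all, netflow_keep_a_b

    # transforms.conf: drop everything by default ...
    [netflow_drop_all]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    # ... then keep only net_A <-> net_B dialogs
    [netflow_keep_a_b]
    REGEX = (src=10\.1\..*dest=10\.2\.|src=10\.2\..*dest=10\.1\.)
    DEST_KEY = queue
    FORMAT = indexQueue

Note that CIDRMATCH is an eval/search-time function, so it cannot be used for this kind of index-time filtering; the regex has to approximate the subnets.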
I made a column chart and overlaid 2 fields as line charts. Can I display these 2 overlay fields on separate Y-axes? (One field on the Y1 axis and the other field on the Y2 axis)
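A hedged Simple XML sketch of one way to do this (fieldA and fieldB are placeholders for the two overlay fields): overlay both fields, then assign one of them to the second axis:

    <option name="charting.chart">column</option>
    <option name="charting.chart.overlayFields">fieldA,fieldB</option>
    <option name="charting.axisY2.enabled">1</option>
    <option name="charting.axisY2.fields">fieldB</option>

With this, fieldA would be plotted against the left (Y1) axis and fieldB against the right (Y2) axis.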
Hello Folks, We are using NAS storage to store our Splunk frozen data. I'd like to know: if the NAS storage is not available (let's say the NAS server is down due to a reboot, or there is an internal network issue between Splunk and the NAS), what will happen to the rolling data? Will it be held in Splunk until the frozen path comes back online, or will Splunk just ignore it and drop the data? Your response is much appreciated.
Hi, I need to add a condition to a query where I am checking whether a field equals a specific URL: URL_WITH_GET_PARAM="/support/order-status/en-ca/order-detail?v=i14miCmn%2fGNPxkcL6EcMTz2YFrQ8V6c7cYOovCLWD4jQIQ5u1Trd%2bqtiMhUYGmuT&src=oc&ref=secUrlPurchase&lwp=rt&t=OS_E" However, when I add this onto my form, it gives me an invalid character error. I know I need to escape certain characters in XML but I am unsure where .... can you please help? Many thanks!
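For context, a hedged sketch of the two usual options in Simple XML (the URL below is shortened to a placeholder; index=web is illustrative only): either escape every & in the value as &amp;, or wrap the whole search in a CDATA section so nothing needs escaping:

    URL_WITH_GET_PARAM="/order-detail?v=abc&amp;src=oc&amp;ref=secUrlPurchase&amp;lwp=rt&amp;t=OS_E"

    <query><![CDATA[
      index=web URL_WITH_GET_PARAM="/order-detail?v=abc&src=oc&ref=secUrlPurchase&lwp=rt&t=OS_E"
    ]]></query>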
Hello guys, I am very new to Splunk Enterprise, so please bear with me... I just want some advice or getting-started tips on how I can use Splunk with our company router for event analysis. Is there any specific configuration I should add to my router?
So I have a data set:
User      Vehicle
User_a    Car
User_b    Car
User_a    MotorBike
User_c    MotorBike
User_d    Car
User_c    Bicycle
User_a    Bicycle
User_c    Scooter
User_e    Car
What I need is to be able to run a search against this type of dataset and pull out only one return per username, prioritised by Car, then MotorBike, then Bicycle, then Scooter. I only need ONE return for any given user - if they have all four, based upon priority they are reported as a car owner. If they only have two or three of the four, they only get reported as the owner of the highest-priority vehicle. I'm currently doing a search for cars (score 1pt), appending motorbikes (score 2pt), and so on, but that is slow on a big dataset.
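A hedged SPL sketch of a single-pass alternative to the append-and-score approach, assuming the data really is in fields named User and Vehicle: map each vehicle to a numeric priority, keep the minimum per user, then map back:

    ...
    | eval priority=case(Vehicle=="Car",1, Vehicle=="MotorBike",2, Vehicle=="Bicycle",3, Vehicle=="Scooter",4)
    | stats min(priority) as priority by User
    | eval Vehicle=case(priority==1,"Car", priority==2,"MotorBike", priority==3,"Bicycle", priority==4,"Scooter")

This avoids the repeated appends, since everything is resolved in one stats pass.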
Hi, I have three searches whose results need to be combined and shown on a timechart. I have to count "Success", "Fail", "Total" and "Other". Here is my SPL:
index="myindex" TURNOFF NOT DISPOSE AND "A[*00]" AND "Packet Processed:" source="/data/app.log*"
| bin _time span=1h
| stats count AS Success by _time
| join _time
    [search index="myindex" TURNOFF NOT DISPOSE AND "Packet Processed:" source="/data/app.log*"
    | rex "R\[(?<R>\d+)"
    | search A!=*00
    | bin _time span=1h
    | stats count AS Failed by _time ]
| join _time
    [search index="myindex" TURNOFF NOT DISPOSE AND "Packet Processed:" AND "A[*]" source="/data/app.log*"
    | bin _time span=1h
    | stats count AS Totals by _time ]
| eval TotalSF=Success+Failed
| eval Other=Totals-TotalSF
| fields - Totals TotalSF
| addtotals
It works correctly on a short time range (like 2 or 3 hours), but when I set the time range to Yesterday it produces a negative number for the field I call Other. Any idea? Thanks
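A hedged single-pass sketch that avoids the joins (join subsearches are subject to result and runtime limits, which is a plausible cause of the inconsistent counts over longer ranges). The rex and the classification rule below are assumptions based on the snippets above and would need adjusting to the real A[...] format:

    index="myindex" TURNOFF NOT DISPOSE "Packet Processed:" source="/data/app.log*"
    | rex "A\[(?<A>[^\]]+)\]"
    | eval outcome=case(isnull(A),"Other", like(A,"%00"),"Success", true(),"Failed")
    | timechart span=1h count as Total count(eval(outcome="Success")) as Success count(eval(outcome="Failed")) as Failed count(eval(outcome="Other")) as Other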
Hi, I would like to have a search bar that is not hard-coded. Right now I have:
<label>Search by Customer</label>
<choice value="A">A</choice>
<choice value="B">B</choice>
<choice value="C">C</choice>
<choice value="All Customers">All</choice>
<default>All</default>
However, this only allows me to hard-code all possible values. Once a new customer comes in, it will not be captured in the search tab. Is there a way I can dynamically link the token to the dataset and have the choice values update automatically? Thanks! Really appreciate it.
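A hedged Simple XML sketch of a dropdown populated by a search instead of hard-coded choices (the index, the field name customer, and the time range are placeholders):

    <input type="dropdown" token="customer_tok" searchWhenChanged="true">
      <label>Search by Customer</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>customer</fieldForLabel>
      <fieldForValue>customer</fieldForValue>
      <search>
        <query>index=my_index | stats count by customer | sort customer | fields customer</query>
        <earliest>-7d@d</earliest>
        <latest>now</latest>
      </search>
    </input>

New customers then appear automatically as soon as they show up in the data, and the hard-coded "All" choice is kept as a wildcard.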
Does AppD provide the ability to monitor signal strength and packet loss, and to identify whether there is a Wi-Fi connectivity failure at the end-user device? We want to monitor network issues on the point-of-sale tablets used by our employees.
Hey Everyone, We are currently running into an issue with one of our sourcetypes coming in roughly five hours in the future. We have reviewed the appliance and verified that the server is set to the correct time, but we are unable to locate any issues on the appliance. We are attempting to modify inputs.conf and props.conf to manually adjust the time. We need the time to come in as Central time. Example of timestamp in log:
{ "Timestamp": "7/1/2022 4:15:28 PM", "MessageTime": "7/1/2022 4:15:31 PM" ..........
UF local/inputs.conf:
[monitor://D:\loglog\*.log]
disabled=false
followTail=0
index=logindex
UF local/props.conf:
[source::....log]
sourcetype=logsourcetype
TZ=America/Chicago
Time_PREFIX=Timestamp":
I have attempted the following change to local/inputs.conf but reverted back to the original one above:
[monitor://D:\loglog\*.log]
disabled=false
followTail=0
TZ=America/Chicago
index=logindex
I have also reviewed system/local to ensure that there is nothing in there as well. Any recommendations? We have been troubleshooting this timing issue for a few months now and I would like to finally find a way to resolve it on the Splunk side, as we appear to be unable to resolve it server side.
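A hedged props.conf sketch for the timestamp part (the sourcetype stanza and TZ value are taken from the post; note that the documented attribute spelling is TIME_PREFIX, and that timestamp/TZ settings in props.conf take effect where the data is parsed - normally the indexers or a heavy forwarder, not a universal forwarder):

    [logsourcetype]
    TZ = America/Chicago
    TIME_PREFIX = "Timestamp":\s*"
    TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
    MAX_TIMESTAMP_LOOKAHEAD = 30

TIME_FORMAT here is a guess matching the sample "7/1/2022 4:15:28 PM" timestamp. If the events are actually generated in a zone other than Central, TZ should be set to the zone the timestamps are written in, and Splunk converts at search/display time.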
Hi everyone: I have a lookup I am using to filter against another lookup and I'm having trouble getting the output to appear like I want it. I've looked at many answers here and just can't figure out what I'm doing wrong. My search:
| inputlookup lookup_1
| table cveID asset Project product dueDate
| mvcombine delim=",", asset
| nomv asset
| eval numberCVEs=mvcount(split(asset,","))
| appendpipe [ |inputlookup lookup_2 | fields q1-cveID ]
| fillnull
| table cveID, q1-cveID asset Project product dueDate
I am trying to match the cveID field of lookup_1 to the q1-cveID field of lookup_2. And if there are CVEs that exist in lookup 2 but do not exist in lookup 1, I still want them to appear with a "0" value in the asset column (thus the append). What I'm currently getting:
cveID      q1-cveID   Asset   Project     product   dueDate
cve-1234   0          7       Microsoft   Office    2022-07-01
0          cve-1234   0       0           0         0
What I would like to see:
cveID      q1-cveID   Asset   Project     product   dueDate
cve-1234   cve-1234   7       Microsoft   Office    2022-07-01
cve-5678   cve-5678   0       Apple       iPhone    2022-08-01
How do I match up these two lookups and create a "0" value for items that exist only in lookup 2?
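A hedged sketch of one way to get the desired shape, starting from lookup_2 so that CVEs missing from lookup_1 still produce a row (the aggregation functions are assumptions about how asset, Project, product and dueDate should roll up per CVE):

    | inputlookup lookup_2
    | eval cveID='q1-cveID'
    | join type=left cveID
        [ | inputlookup lookup_1
          | stats count(asset) as Asset, values(Project) as Project, values(product) as product, values(dueDate) as dueDate by cveID ]
    | fillnull value=0 Asset
    | table cveID, q1-cveID, Asset, Project, product, dueDate

The single quotes around 'q1-cveID' are needed in eval because of the hyphen in the field name.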
Hello, How would I know whether my Linux and Windows machines have a UF or an HF installed on them? One of them is installed on my machines for sure, that I know.... but how would I know which one (UF or HF)? What is the indicator if a UF is there, and what is the indicator if an HF is there? I would appreciate your help, and thank you so much.
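A hedged way to check, based on default install locations and the Splunk CLI (the paths are the defaults and may differ in your environment):

    # Linux: a universal forwarder normally lives in /opt/splunkforwarder,
    # a heavy forwarder (full Splunk Enterprise configured to forward) in /opt/splunk
    /opt/splunkforwarder/bin/splunk version   # prints "Splunk Universal Forwarder <version> ..."
    /opt/splunk/bin/splunk version            # prints "Splunk <version> ..."

On Windows, the default folders are C:\Program Files\SplunkUniversalForwarder versus C:\Program Files\Splunk, and the Windows service names differ accordingly.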
Hello, We are onboarding json formatted data and the data is written to a file on a server using a process called  Periodic Stats Logger. The file has initial header line where splunk is not recognizing as Json file.I want to exclude that line and want the data to be loaded to a metric index. I created a metric index and also added null queue to exclude first header line but still I dont see any metric data coming into metric index. Configuration on Indexer(Standalone): Props.conf: [log2metrics_json] TRANSFORMS-t1=eliminate_header Transforms.conf: [eliminate_header] REGEX= file DEST_KEY=queue FORMAT=nullQueue Configuration on Forwarder: Inputs.conf [monitor:///opt/logs/dsstats.json] sourcetype = log2metrics_json index = pdmetrics disabled = false Props.conf [log2metrics_json] DATETIME_CONFIG = INDEXED_EXTRACTIONS = json LINE_BREAKER = ([\r\n]+) METRIC-SCHEMA-TRANSFORMS = metric-schema:log2metrics_default_json NO_BINARY_CHECK = true category = Log to Metrics pulldown_type = 1 description = JSON-formatted data. Log-to-metrics processing converts the numeric values in json keys into metric data points. Sample Raw file: # Log file created at 25/Jun/2022:23:59:22 -0400 after rotating from file dsstats.json.20220626035922Z {"Datetime":"2022-06-26T03:59:23.000723Z","Num Ops in Progress (Avg)":0,"Num Ops in Progress (Max)":0,"All Ops":0,"All Avg Time (ms)":0.0,"All Ops/sec":0.0,"Search Ops":0,"Search Avg Time (ms)":0.0,"Search Ops/sec":0.0,"Modify Ops":0,"Modify Avg Time (ms)":0.0,"Modify Ops/sec":0.0,"Add Ops":0,"Add Avg Time (ms)":0.0,"Add Ops/sec":0.0,"Delete Ops":0,"Delete Avg Time (ms)":0.0,"Delete Ops/sec":0.0,"ModifyDN Ops":0,"ModifyDN Avg Time (ms)":0.0,"ModifyDN Ops/sec":0.0,"Bind Ops":0,"Bind Avg Time (ms)":0.0,"Bind Ops/sec":0.0,"Compare Ops":0,"Compare Avg Time (ms)":0.0,"Compare Ops/sec":0.0,"Extended Ops":0,"Extended Avg Time (ms)":0.0,"Extended Ops/sec":0.0,"Num Current Conn":389.0,"Num New Conn":0,"Backend userRoot Entry Count":957545.0,"Backend userRoot DB Cache % Full":11.0,"Backend userRoot Size on Disk (GB)":1.1374394875019789,"Backend userRoot Active Cleaner Threads (Avg)":0,"Backend userRoot Cleaner Backlog (Avg)":0,"Backend userRoot Cleaner Backlog (Min)":0,"Backend userRoot Cleaner Backlog (Max)":0,"Backend userRoot Nodes Evicted":0,"Backend userRoot Checkpoints":0,"Backend userRoot Average Checkpoint Duration":0,"Backend userRoot New DB Logs":0,"Replica dc=mediacom,dc=com Replication Backlog (Avg)":0,"Replica dc=mediacom,dc=com Replication Backlog (Min)":0,"Replica dc=mediacom,dc=com Replication Backlog (Max)":0,"Replica dc=mediacom,dc=com Sent Updates":0,"Replica dc=mediacom,dc=com Received Updates":0,"Replica dc=mediacom,dc=com Failed Replayed Updates":0,"Replica dc=mediacom,dc=com Unresolved Naming Conflicts":0,"Replica dc=mediacom,dc=com Replication Backlog Oldest Pending Change":0,"Backend replicationChanges DB Cache % Full":2.0,"Backend replicationChanges Active Cleaner Threads (Avg)":0,"Backend replicationChanges Cleaner Backlog (Avg)":0,"Backend replicationChanges Cleaner Backlog (Min)":0,"Backend replicationChanges Cleaner Backlog (Max)":0,"Worker Thread % Busy (Avg)":0,"Worker Thread % Busy (Max)":0,"Queue Size (Avg)":0,"Queue Size (Max)":0,"JVM Memory Used GB":12.2666015625,"JVM Memory Percent Full":56.0,"Memory Consumer Total GB":1.2536366451531649,"Memory Consumer Percentage of Total Memory":5.0,"g1-young-generation minor-collection g1-evacuation-pause Count":0,"g1-young-generation minor-collection metadata-gc-threshold Count":0,"g1-young-generation 
minor-collection g1-evacuation-pause Duration (ms)":0,"g1-young-generation minor-collection metadata-gc-threshold Duration (ms)":0,"g1-eden-space Live Bytes":0,"g1-survivor-space Live Bytes":0,"g1-old-gen Live Bytes":0,"g1-young-generation minor-collection gclocker-initiated-gc Count":0,"g1-young-generation minor-collection gclocker-initiated-gc Duration (ms)":0,"All Ops 0-1 ms":0,"All Ops 1-2 ms":0,"All Ops 2-3 ms":0,"All Ops 3-5 ms":0,"All Ops 5-10 ms":0,"All Ops 10-20 ms":0,"All Ops 20-30 ms":0,"All Ops 30-50 ms":0,"All Ops 50-100 ms":0,"All Ops 100-1000 ms":0,"All Ops > 1000 ms":0} {"Datetime":"2022-06-26T03:59:24.000382Z","Num Ops in Progress (Avg)":0,"Num Ops in Progress (Max)":0,"All Ops":0,"All Avg Time (ms)":0.0,"All Ops/sec":0.0,"Search Ops":0,"Search Avg Time (ms)":0.0,"Search Ops/sec":0.0,"Modify Ops":0,"Modify Avg Time (ms)":0.0,"Modify Ops/sec":0.0,"Add Ops":0,"Add Avg Time (ms)":0.0,"Add Ops/sec":0.0,"Delete Ops":0,"Delete Avg Time (ms)":0.0,"Delete Ops/sec":0.0,"ModifyDN Ops":0,"ModifyDN Avg Time (ms)":0.0,"ModifyDN Ops/sec":0.0,"Bind Ops":0,"Bind Avg Time (ms)":0.0,"Bind Ops/sec":0.0,"Compare Ops":0,"Compare Avg Time (ms)":0.0,"Compare Ops/sec":0.0,"Extended Ops":0,"Extended Avg Time (ms)":0.0,"Extended Ops/sec":0.0,"Num Current Conn":389.0,"Num New Conn":0,"Backend userRoot Entry Count":957545.0,"Backend userRoot DB Cache % Full":11.0,"Backend userRoot Size on Disk (GB)":1.1374394875019789,"Backend userRoot Active Cleaner Threads (Avg)":0,"Backend userRoot Cleaner Backlog (Avg)":0,"Backend userRoot Cleaner Backlog (Min)":0,"Backend userRoot Cleaner Backlog (Max)":0,"Backend userRoot Nodes Evicted":0,"Backend userRoot Checkpoints":0,"Backend userRoot Average Checkpoint Duration":0,"Backend userRoot New DB Logs":0,"Replica dc=mediacom,dc=com Replication Backlog (Avg)":0,"Replica dc=mediacom,dc=com Replication Backlog (Min)":0,"Replica dc=mediacom,dc=com Replication Backlog (Max)":0,"Replica dc=mediacom,dc=com Sent Updates":0,"Replica dc=mediacom,dc=com Received Updates":0,"Replica dc=mediacom,dc=com Failed Replayed Updates":0,"Replica dc=mediacom,dc=com Unresolved Naming Conflicts":0,"Replica dc=mediacom,dc=com Replication Backlog Oldest Pending Change":0,"Backend replicationChanges DB Cache % Full":2.0,"Backend replicationChanges Active Cleaner Threads (Avg)":0,"Backend replicationChanges Cleaner Backlog (Avg)":0,"Backend replicationChanges Cleaner Backlog (Min)":0,"Backend replicationChanges Cleaner Backlog (Max)":0,"Worker Thread % Busy (Avg)":0,"Worker Thread % Busy (Max)":0,"Queue Size (Avg)":0,"Queue Size (Max)":0,"JVM Memory Used GB":12.3291015625,"JVM Memory Percent Full":56.0,"Memory Consumer Total GB":1.2536366451531649,"Memory Consumer Percentage of Total Memory":5.0,"g1-young-generation minor-collection g1-evacuation-pause Count":0,"g1-young-generation minor-collection metadata-gc-threshold Count":0,"g1-young-generation minor-collection g1-evacuation-pause Duration (ms)":0,"g1-young-generation minor-collection metadata-gc-threshold Duration (ms)":0,"g1-eden-space Live Bytes":0,"g1-survivor-space Live Bytes":0,"g1-old-gen Live Bytes":0,"g1-young-generation minor-collection gclocker-initiated-gc Count":0,"g1-young-generation minor-collection gclocker-initiated-gc Duration (ms)":0,"All Ops 0-1 ms":0,"All Ops 1-2 ms":0,"All Ops 2-3 ms":0,"All Ops 3-5 ms":0,"All Ops 5-10 ms":0,"All Ops 10-20 ms":0,"All Ops 20-30 ms":0,"All Ops 30-50 ms":0,"All Ops 50-100 ms":0,"All Ops 100-1000 ms":0,"All Ops > 1000 ms":0} {"Datetime":"2022-06-26T03:59:25.000195Z","Num Ops in 
Progress (Avg)":0,"Num Ops in Progress (Max)":0,"All Ops":0,"All Avg Time (ms)":0.0,"All Ops/sec":0.0,"Search Ops":0,"Search Avg Time (ms)":0.0,"Search Ops/sec":0.0,"Modify Ops":0,"Modify Avg Time (ms)":0.0,"Modify Ops/sec":0.0,"Add Ops":0,"Add Avg Time (ms)":0.0,"Add Ops/sec":0.0,"Delete Ops":0,"Delete Avg Time (ms)":0.0,"Delete Ops/sec":0.0,"ModifyDN Ops":0,"ModifyDN Avg Time (ms)":0.0,"ModifyDN Ops/sec":0.0,"Bind Ops":0,"Bind Avg Time (ms)":0.0,"Bind Ops/sec":0.0,"Compare Ops":0,"Compare Avg Time (ms)":0.0,"Compare Ops/sec":0.0,"Extended Ops":0,"Extended Avg Time (ms)":0.0,"Extended Ops/sec":0.0,"Num Current Conn":389.0,"Num New Conn":0,"Backend userRoot Entry Count":957545.0,"Backend userRoot DB Cache % Full":11.0,"Backend userRoot Size on Disk (GB)":1.1374394875019789,"Backend userRoot Active Cleaner Threads (Avg)":0,"Backend userRoot Cleaner Backlog (Avg)":0,"Backend userRoot Cleaner Backlog (Min)":0,"Backend userRoot Cleaner Backlog (Max)":0,"Backend userRoot Nodes Evicted":0,"Backend userRoot Checkpoints":0,"Backend userRoot Average Checkpoint Duration":0,"Backend userRoot New DB Logs":0,"Replica dc=mediacom,dc=com Replication Backlog (Avg)":0,"Replica dc=mediacom,dc=com Replication Backlog (Min)":0,"Replica dc=mediacom,dc=com Replication Backlog (Max)":0,"Replica dc=mediacom,dc=com Sent Updates":0,"Replica dc=mediacom,dc=com Received Updates":0,"Replica dc=mediacom,dc=com Failed Replayed Updates":0,"Replica dc=mediacom,dc=com Unresolved Naming Conflicts":0,"Replica dc=mediacom,dc=com Replication Backlog Oldest Pending Change":0,"Backend replicationChanges DB Cache % Full":2.0,"Backend replicationChanges Active Cleaner Threads (Avg)":0,"Backend replicationChanges Cleaner Backlog (Avg)":0,"Backend replicationChanges Cleaner Backlog (Min)":0,"Backend replicationChanges Cleaner Backlog (Max)":0,"Worker Thread % Busy (Avg)":0,"Worker Thread % Busy (Max)":0,"Queue Size (Avg)":0,"Queue Size (Max)":0,"JVM Memory Used GB":12.4052734375,"JVM Memory Percent Full":56.0,"Memory Consumer Total GB":1.2536366451531649,"Memory Consumer Percentage of Total Memory":5.0,"g1-young-generation minor-collection g1-evacuation-pause Count":0,"g1-young-generation minor-collection metadata-gc-threshold Count":0,"g1-young-generation minor-collection g1-evacuation-pause Duration (ms)":0,"g1-young-generation minor-collection metadata-gc-threshold Duration (ms)":0,"g1-eden-space Live Bytes":0,"g1-survivor-space Live Bytes":0,"g1-old-gen Live Bytes":0,"g1-young-generation minor-collection gclocker-initiated-gc Count":0,"g1-young-generation minor-collection gclocker-initiated-gc Duration (ms)":0,"All Ops 0-1 ms":0,"All Ops 1-2 ms":0,"All Ops 2-3 ms":0,"All Ops 3-5 ms":0,"All Ops 5-10 ms":0,"All Ops 10-20 ms":0,"All Ops 20-30 ms":0,"All Ops 30-50 ms":0,"All Ops 50-100 ms":0,"All Ops 100-1000 ms":0,"All Ops > 1000 ms":0} Please do the needful Help.Thanks in advance
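A hedged sketch for the header filtering, with one important caveat: data parsed by INDEXED_EXTRACTIONS = json on a universal forwarder is generally not re-parsed on the indexer, so an indexer-side TRANSFORMS stanza may never see it. One option is to move the JSON extraction and the header filtering together onto a heavy forwarder (or drop INDEXED_EXTRACTIONS from the forwarder and let the indexer do both), with the regex anchored to the comment marker rather than the word "file":

    # transforms.conf (on the component that actually parses the data)
    [eliminate_header]
    REGEX = ^\s*#
    DEST_KEY = queue
    FORMAT = nullQueue

    # props.conf
    [log2metrics_json]
    TRANSFORMS-t1 = eliminate_header

This assumes the header line always starts with "#", as in the sample above.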
Hello Splunkers, Is a Splunk forwarder required to send data to Splunk from a switch or router? Can I configure the device to send logs directly to Splunk, for example using port 514, as in a Cisco config - "logging host", etc.? Thanks EWH
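A hedged sketch of a direct network input on an indexer or heavy forwarder (the sourcetype and index values are placeholders); a common alternative is to point the device at a dedicated syslog server and read the resulting files with a forwarder:

    # inputs.conf
    [udp://514]
    sourcetype = syslog
    index = network
    connection_host = ip

On Linux, binding to port 514 requires elevated privileges, which is one reason a syslog relay listening on 514 and forwarding to Splunk is often preferred.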
Does Splunk have an add-on for Cisco VTC devices? I am getting logs but the information is not human readable.
Hi all, as part of complete monitoring for our customers, we want to detect if a subsearch is finalized. If this happens, the search results of the corresponding main search could be incomplete. Especially for scheduled searches, where no user gets any feedback, this would be helpful. Is there any log channel where this is logged? I tried starting Splunk in debug mode but found nothing. The only place is the search logs, but indexing every search log seems a bit oversized. Anyone out there who could provide a hint? Thanks!
Good morning community, I want to generate an alert in Splunk based on some graphs that are generated from a .TXT file, so I only need to use the last two values written to that file and apply a formula to check whether the value drops by 10%. When I query the TXT file, it displays a list like this in the events:
2022-7-1 11:00:0 OVERALL: 10000
2022-7-1 12:00:0 OVERALL: 11000
I just need to get the last numeric value and the penultimate numeric value registered in the list, put them into variables, and compare these two values to see if there is a difference of more than 10%. Please, if you have had a similar case, share the solution.
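A hedged SPL sketch of one way to compare the last two OVERALL values (the index, source, and rex are assumptions based on the sample lines above):

    index=main source="*your_file.txt" "OVERALL:"
    | rex "OVERALL:\s*(?<overall>\d+)"
    | sort - _time
    | head 2
    | stats latest(overall) as latest_value, earliest(overall) as previous_value
    | eval pct_change=round((latest_value - previous_value) / previous_value * 100, 2)
    | where pct_change <= -10

Used as an alert, this returns a row only when the latest value has dropped by 10% or more compared to the previous one.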
Hi, In Splunk, is a custom visualization just any visualization modified by the user, or is it a visualization that specifically makes use of the Splunk Custom Visualization API? I am confused.