All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello there, I have been trying to use splunk check-integrity to check the integrity of some indexes. I get the error: Integrity check error for bucket with path=..\Splunk\var\lib\splunk\defaultdb\db\b_1668784284_1668613842_9, Reason=Journal has no hashes. I know that events indexed before adding enableDataIntegrityControl=true to indexes.conf are not covered, but I thought the buckets created after enabling enableDataIntegrityControl=true would have a hash. That does not seem to be the case. Am I missing something? Thank you.
Hello All, when using "stats count by column1, column2, column3, column4" I get the result below.

Existing table:

column1   column2   column3   column4
XXXXXXXX  YYYYY     A         123
XXXXXXXX  YYYYY     B         123
XXXXXXXX  YYYYY     C         123
XXXXXXXX  YYYYY     D         123
XXXXXXXX  YYYYY     E         123

Whereas I need this result:

column1   column2   column3   column4
XXXXXXXX  YYYYY     A         123
                    B
                    C
                    D
                    E

Could somebody please help me with the query? Thanks,
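A possible approach, sketched under the assumption that the repeated values of column1, column2, and column4 should simply be blanked after the first row of each group (field names taken from the question):

```
| stats count by column1, column2, column3, column4
| streamstats count as row_in_group by column1, column2
| eval column1=if(row_in_group=1, column1, ""),
       column2=if(row_in_group=1, column2, ""),
       column4=if(row_in_group=1, column4, "")
| fields - row_in_group, count
```

streamstats numbers the rows within each group, and the eval keeps the values only on the first row.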
I have a log file with events that indicate activities on a server. I am interested in the Login and Logout activities - I need to create a report of active sessions. I managed to order the events so that I get Login-Logout events consecutively for each user.
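Not knowing the exact index and field names, here is a hedged sketch (server_logs, user, and action are hypothetical names) that uses transaction to pair each Login with the following Logout per user:

```
index=server_logs (action="Login" OR action="Logout")
| transaction user startswith="action=Login" endswith="action=Logout"
| table user, _time, duration
```

transaction computes duration (session length in seconds) for each paired session; sessions that never logged out can be surfaced with keepevicted=true.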
Hi, Splunkers. I am just wondering about this phase; it's not critical for me right now. I am trying to integrate an Apache web server with Splunk ITSI, but I am stuck at the service-creation phase because | savedsearch DA-ITSI-WEBSERVER-WebServer_Entity_Search is not returning results. My real question is about KPI searches. I want to show Website 4xx Errors on the default services page, but the Website 4xx Errors search includes tag=activity, and I cannot find that tag in any add-on, so my website 4xx error search is not working properly. How do I fix this issue? The tag is also not included in the Splunk ITSI content pack. Thank you.
Hi everyone, I am trying to set an attribute to true for all elements having a certain ID, when 2 defined activities are present for that ID. In my opinion the corresponding SQL query would be:

Update t set isvalid = true where id in (select id from t group by id having activity = 'a' and activity = 'b')

A result might look like:

id   activity  istrue
001  a         true
001  b         true
001  c         true
002  a         false
002  c         false
002  d         false
003  a         true
003  b         true

Is there a way to do this in SPL? Thanks, Lukas
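One way to express this HAVING-style condition in SPL is to collect each id's activities with eventstats and then test for both values (field names taken from the question):

```
| eventstats values(activity) as all_acts by id
| eval isvalid=if(isnotnull(mvfind(all_acts, "^a$")) AND isnotnull(mvfind(all_acts, "^b$")), "true", "false")
| fields - all_acts
```

mvfind returns the index of the first multivalue entry matching the regex, or null when nothing matches, so isvalid is "true" only when both 'a' and 'b' occur for that id.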
Hi, I have a dashboard with a base search and a number of chain searches. My base search is very long and the chain searches are just different stats commands. However, the dashboard does not render results unless I also place a stats command in the base search. This is where I am running into trouble, as I would need a stats command generic enough to go before each panel's unique stats command. Example:

Base search: index = ABC .......
Chain search 1: | stats count by XYZ | head 10
Chain search 2: | stats count by MNO | head 10

These queries render when I open them via "Open in Search", but no results are generated for any panel on the dashboard. The panels only render when I add a stats command to the base search, e.g. index = ABC ....... | stats count by GHI. However, that stats command in the base search precludes me from adding an individual stats command for each panel. Is there a generic stats command I can add to the base search? Thanks!
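A common fix, sketched here with the placeholder field names from the question: keep the base search non-transforming but end it with an explicit fields command, because a non-transforming base search only hands the chain searches the fields it is told to keep:

```
index=ABC | fields XYZ, MNO
```

The chain searches (| stats count by XYZ | head 10, etc.) can then stay unchanged, and no generic stats is needed in the base.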
Hi. I'm trying to apply a rule that drops some events and, at the same time, keeps only certain events on the indexers. Here is what I have.

props.conf:

[mysourcetype]
TRANSFORMS-filter = drop

transforms.conf:

[drop]
REGEX = drop_event1|drop_event2|drop_eventX
DEST_KEY = queue
FORMAT = nullQueue

This is the standard way to drop events, and it works! But I can't find a way to make drop and keep work together.

props.conf:

[mysourcetype]
TRANSFORMS-filter = drop,filter

transforms.conf:

[drop]
REGEX = drop_event1|drop_event2|drop_eventX
DEST_KEY = queue
FORMAT = nullQueue

[filter]
REGEX = get_event1|get_event2|get_eventX
DEST_KEY = queue
FORMAT = indexQueue

I would like to tell Splunk 8: FIRST, drop all events matching the regex "drop_event1|drop_event2|drop_eventX"; SECOND, keep only events matching the regex "get_event1|get_event2|get_eventX". It does not work! After correctly dropping, Splunk keeps everything (".*") except, as said, "drop_event1|drop_event2|drop_eventX". Any suggestion?
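The documented pattern for "keep only some events, discard the rest" is the reverse order: route everything to the nullQueue first, then whitelist what should be indexed, since the later matching transform wins. A sketch based on the stanzas above:

```
# props.conf
[mysourcetype]
TRANSFORMS-filter = drop_all, keep_wanted

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted]
REGEX = get_event1|get_event2|get_eventX
DEST_KEY = queue
FORMAT = indexQueue
```

With this ordering an explicit drop list is no longer needed, as long as the drop patterns cannot also match the keep patterns.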
I am using the outlier visualization in my dashboard to detect outliers during business hours, from 5 A.M. to 7 P.M. When I run the query with other visualizations, such as a line chart, the tooltip displays the correct time, but with the outlier visualization the tooltip displays the wrong time, even though the graph values in the visualization are correct. How do I solve this issue?
What is the cause of, and the solution for, the following error?

ERROR HttpClientRequest - HTTP client error=Connection closed by peer while accessing server=https://aws-ix-s2ioadata-backet-598294183213.s3-ap-northeast-1.amazonaws.com for request=https://aws-ix-s2ioadata-backet-598294183213.s3-ap-northeast-1.amazonaws.com/warmdata/ioaadsecurity/dma/f7/bb/812~2B94CFE8-9DC3-4E45-9ADF-CAF58B0CDDFE/89513704-8894-4CFC-AC58-9BF7D36B3B59_DM_Splunk_SA_CIM_Web/guidSplunk-2B94CFE8-9DC3-4E45-9ADF-CAF58B0CDDFE/metadata_checksum.
Hello there. I tried to set up perfmon inputs to capture the state of my Windows 10 test box. Aaaaand... it's not working, and I have no idea how to debug it further. The inputs seem to be defined properly (I don't understand why there are two identical definitions for perfmon://CPU and perfmon://Processor, but while testing I tried running with just one perfmon input enabled and the result was the same, so it's definitely not the result of overlapping inputs).

PS C:\Program Files\SplunkUniversalForwarder\bin> .\splunk.exe btool inputs list perfmon://CPU
[perfmon://CPU]
counters = % Processor Time; % User Time; % Privileged Time; Interrupts/sec; % DPC Time; % Interrupt Time; DPCs Queued/sec; DPC Rate; % Idle Time; % C1 Time; % C2 Time; % C3 Time; C1 Transitions/sec; C2 Transitions/sec; C3 Transitions/sec
disabled = 0
host = dziura
index = winmetrics
instances = *
interval = 300
mode = multikv
object = Processor
useEnglishOnly = true

PS C:\Program Files\SplunkUniversalForwarder\bin> .\splunk.exe btool inputs list perfmon://Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
host = dziura
index = winmetrics
instances = *
interval = 300
mode = multikv
object = Process
useEnglishOnly = true

[perfmon://Processor]
counters = % Processor Time; % User Time; % Privileged Time; Interrupts/sec; % DPC Time; % Interrupt Time; DPCs Queued/sec; DPC Rate; % Idle Time; % C1 Time; % C2 Time; % C3 Time; C1 Transitions/sec; C2 Transitions/sec; C3 Transitions/sec
disabled = 0
host = dziura
index = winmetrics
instances = *
interval = 300
mode = multikv
object = Processor
useEnglishOnly = true

The list inputstatus shows:

C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe
exit status description = exited with code -1
time closed = 2022-11-21T09:27:17+0100
time opened = 2022-11-21T09:27:14+0100

I raised the logging level for modularinputs and execprocessor to DEBUG, but it's still not helpful:

11-21-2022 09:27:10.491 +0100 DEBUG ModularInputs [6028 MainThread] - Found scheme="perfmon".
11-21-2022 09:27:10.491 +0100 DEBUG ModularInputs [6028 MainThread] - Locating script for scheme="perfmon"...
11-21-2022 09:27:10.491 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.bat".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.cmd".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.py".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.js".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.exe".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.bat".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.cmd".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.py".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.js".
11-21-2022 09:27:10.492 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\perfmon.exe".
11-21-2022 09:27:10.493 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\bin\perfmon.bat".
11-21-2022 09:27:10.493 +0100 DEBUG ModularInputs [6028 MainThread] - Found script ""C:\Program Files\SplunkUniversalForwarder\etc\system\bin\perfmon.cmd"" to handle scheme "perfmon".
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - Introspecting scheme=perfmon: exited: status=done, exit=0
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - XML scheme path "\scheme\script": "script" -> "splunk-perfmon.path"
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - XML endpoint path "\scheme\endpoint\id": "id" -> "win-perfmon"
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - Setting up values from introspection for scheme "perfmon".
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - Locating script for scheme="perfmon"...
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\splunk-perfmon.path".
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\splunk-perfmon.path.exe".
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\splunk-perfmon.path".
11-21-2022 09:27:10.614 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\windows_x86_64\bin\splunk-perfmon.path.exe".
11-21-2022 09:27:10.615 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\bin\splunk-perfmon.path".
11-21-2022 09:27:10.615 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\bin\splunk-perfmon.path.exe".
11-21-2022 09:27:10.615 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\bin\splunk-perfmon.path".
11-21-2022 09:27:10.615 +0100 DEBUG ModularInputs [6028 MainThread] - No regular file="C:\Program Files\SplunkUniversalForwarder\etc\system\bin\splunk-perfmon.path.exe".
11-21-2022 09:27:10.615 +0100 DEBUG ModularInputs [6028 MainThread] - Found script ""C:\Program Files\SplunkUniversalForwarder\bin\scripts\splunk-perfmon.path"" to handle scheme "perfmon".
11-21-2022 09:27:10.615 +0100 DEBUG ModularInputs [6028 MainThread] - For scheme "perfmon" found script "splunk-perfmon.path" at path ""C:\Program Files\SplunkUniversalForwarder\bin\scripts\splunk-perfmon.path""
11-21-2022 09:27:10.615 +0100 DEBUG ModularInputs [6028 MainThread] - Setting "id" to "win-perfmon".
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - In configure(), looking at stanza: [script://C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe] -> {host -> dziura, source -> perfmon, sourcetype -> perfmon}
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - Stanza='script://C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe' isModInput=true isIntrospectionInput=false
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - getInterpreterPathFor(): scriptPath=C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe pyVersStr=
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - After normalization script is ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe""
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - stanza=script://C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe interval=18446744073709551.615
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - Creating an ExecedCommand, cmd='"C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"', args={"C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"}, runViaShell=false
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - ExecProcessorSharedState::addToRunQueue() path='"C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"' restartTimerIfNeeded=0
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - adding ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"" to runqueue
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - cmd='"C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"' Added to run queue
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - Creating InputStatusHandler for group="modular input commands" key="C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"
11-21-2022 09:27:11.098 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - Done configuring ExecedCommand: command='"C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"' runViaShell=0 tickStarted=0 running=0 state=WAITING_ON_RUNQUEUE interval=18446744073709551.615
11-21-2022 09:27:14.883 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - Running: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe" on PipelineSet 0
11-21-2022 09:27:14.883 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - PipelineSet 0: Created new ExecedCommandPipe for ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"", uniqueId=5
11-21-2022 09:27:16.532 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - PipelineSet 0: Got EOF from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"", uniqueId=5
11-21-2022 09:27:17.048 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - PipelineSet 0: Ran script: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe", took 2.172 seconds to run, 0 bytes read 0 events read, status=done, exit=4294967295
11-21-2022 09:27:17.048 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - PipelineSet 0: Destroying ExecedCommandPipe for ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"" id=5
11-21-2022 09:27:17.048 +0100 DEBUG ExecProcessor [12380 ExecProcessor] - cmd='"C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"' Not added to run queue

The only relevant entry here is the line with "exit=4294967295", which corresponds to the inputstatus message that the process exited with -1. But I still don't know why. I accept that the reason may be entirely on the Windows side, but I would like to be able to diagnose it. Oh, and yes - I did try lodctr.exe /r - nothing changes. The UF is running as LOCAL SYSTEM, so it should not have permission issues. Also, I can run perfmon.msc and it shows the counters properly.
Any more debug ideas? I'm stuck.
Hi, I am trying to use the Splunk SDK for JavaScript. I have searched online but came up with no answers. I followed the Splunk tutorial for setting up the SDK and tried to log in with var service = new splunkjs.Service; however, this gives me the error that a web instance is required. I found that a solution is to add var http = new splunkjs.ProxyHttp("/proxy"), but this gives me the error splunkjs.ProxyHttp is not a constructor. Why is this happening?
Hi all, I am attempting to convert data extracted as a field containing a combination of hex and ASCII data. I was wondering if it is possible to convert the hex data into ASCII without affecting the ASCII data? Thanks in advance.
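Assuming the hex portion can first be isolated into its own field (hex_part is a hypothetical name here), one common trick is to rewrite each hex byte as a percent-escape and run it through urldecode:

```
| eval ascii_part=urldecode(replace(hex_part, "([0-9A-Fa-f]{2})", "%\1"))
```

This only behaves sensibly if hex_part contains nothing but hex pairs, so the hex and plain-ASCII portions need to be split apart first (e.g. with rex) before the conversion.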
I have installed the available SignalFx add-on on my on-premises Splunk Enterprise server. I tried searching for tutorials/documentation on how to set up SignalFx on an on-premises Splunk Enterprise server, but I learned that to collect data the Splunk Distribution of OpenTelemetry Collector should be used, and its setup steps mention Splunk Observability Cloud. Please help me understand whether SignalFx can be used with Splunk Enterprise, and if yes, how.
I have two saved search reports with the outputs below. Saved search 1 (totalCountByClient) gives client_name and totalCount as output. Saved search 2 (monitoringCountByClient) gives client_name and monitoringCount as output. I want to show the results of (monitoringCount/totalCount*100) by client_name as a timechart. Any help would be appreciated.
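A sketch, assuming both saved searches retain a usable _time field (otherwise timechart has nothing to bucket on); the append/stats pattern lines the two result sets up by time and client before computing the ratio:

```
| savedsearch totalCountByClient
| append [| savedsearch monitoringCountByClient]
| stats values(totalCount) as totalCount, values(monitoringCount) as monitoringCount by _time, client_name
| eval pct=round(monitoringCount / totalCount * 100, 2)
| timechart span=1h avg(pct) by client_name
```

If the saved searches are already fully aggregated with no _time, a join on client_name and a plain table is the fallback.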
I need to monitor privileged-access employees who can transfer files from the internal to the external network. Privileged-access employees include local admins, users with unlimited internet access, and employees who can use USB flash drives and send emails externally. I need this from DLP data. Please help me do it with Splunk.
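A rough sketch only, since the DLP product and its field names are unknown; it assumes a lookup privileged_users.csv with a user column, and DLP events carrying user, src, dest, direction, and file_name fields (all hypothetical):

```
index=dlp action="file_transfer" direction="outbound"
    [| inputlookup privileged_users.csv | fields user]
| stats count by user, src, dest, file_name
```

The subsearch expands the lookup into a (user=... OR user=...) filter, so only the privileged users' outbound transfers are counted.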
Hi, I've been banging my head against this brick wall for a while, so I'm reaching out for some expertise. It seems pretty straightforward, and regex101 says my expression should work, but I am not getting any data returned in the new field. The original data is:

18 dB, 16 dB, 12 dB, 12 dB, 12 dB, 13 dB, 4 dB, 8 dB, 9 dB, 9 dB
9 dB, 9 dB, 9 dB, 9 dB
9 dB
9 dB, 9 dB, 9 dB, 9 dB, 9 dB
7 dB, 9 dB

I'm trying to remove the space and the text dB after any number, so the 4th event would read 9, 9, 9, 9, 9 and the 5th event would be 7, 9. My search returns the events, but no values for the new field:

|rex field=Value "\ dB(?<MicGainText>)"
|table Value MicGainText

If anyone could assist, it would be greatly appreciated. Thanks in advance, John
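For stripping " dB" everywhere in the field, a sed-mode rex is usually the simplest approach; the rex in the question does match " dB" but captures an empty group, which is why MicGainText comes back empty:

```
| rex mode=sed field=Value "s/ dB//g"
| table Value
```

This edits Value in place; to keep the original, copy it first with | eval MicGainText=Value and run the sed replacement on MicGainText instead.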
Hi all, here is the use case I'm dealing with. We have a large virtual environment in which a lot of teams like to just clone one VM to another, meaning the forwarder hostname and GUID get cloned, which messes with our reporting. I am trying to write a simple script that does the following:

1. Detects whether a UF's hostname is correct or not.
2. Runs a simple scripted input to clear out any cloned configs.
3. Restarts the forwarder so that the new configs are picked up.

#3 is causing me trouble. If I put a "splunk restart" command in the main body of the script, Splunk will stop, kill the scripted input, and never restart. I've also tried creating a "wrapper" script that invokes a separate script to do the restart, but with no success - Splunk stops but does not start back up. Is there a better way to do this? All hosts are AWS Linux.
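One workaround sketch (untested, and the install path is an assumption): start the restart in a new session with setsid so it is not killed along with the scripted input's process group when splunkd stops, and delay it long enough for the shutdown to finish:

```
#!/bin/sh
# Hypothetical sketch - adjust SPLUNK_HOME to the real install path.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}
# Detach from splunkd's process group, wait out the shutdown, then restart.
nohup setsid sh -c "sleep 15; \"$SPLUNK_HOME/bin/splunk\" restart" \
    </dev/null >/dev/null 2>&1 &
```

An alternative with the same effect is scheduling the restart out-of-band from the scripted input, e.g. a cron entry or an at job.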
When trying to help Working with SHA1 value., I encountered some fundamental SPL limitation with large numbers starting around 10000000000000000 (10^16). But Splunk gives no error, it just behaves erratically. Take this example:

| makeresults
| eval i = mvrange(1, 20)
| eval a = mvmap(i, 1000000000000000 + i)
| eval b = mvmap(i, 10000000000000000 - i)
| eval c = mvmap(i, 10000000000000000 + i)
| eval d = mvmap(i, 100000000000000000 + i)
| eval e = mvmap(i, 1000000000000000000 + i)

a:
1000000000000001 1000000000000002 1000000000000003 1000000000000004 1000000000000005 1000000000000006 1000000000000007 1000000000000008 1000000000000009 1000000000000010 1000000000000011 1000000000000012 1000000000000013 1000000000000014 1000000000000015 1000000000000016 1000000000000017 1000000000000018 1000000000000019

b:
10000000000000000 9999999999999998 9999999999999996 9999999999999996 9999999999999996 9999999999999994 9999999999999992 9999999999999992 9999999999999992 9999999999999990 9999999999999988 9999999999999988 9999999999999988 9999999999999986 9999999999999984 9999999999999984 9999999999999984 9999999999999982 9999999999999980

c:
10000000000000000 10000000000000002 10000000000000004 10000000000000004 10000000000000004 10000000000000006 10000000000000008 10000000000000008 10000000000000008 10000000000000010 10000000000000012 10000000000000012 10000000000000012 10000000000000014 10000000000000016 10000000000000016 10000000000000016 10000000000000018 10000000000000020

d:
100000000000000000 100000000000000000 100000000000000000 100000000000000000 100000000000000000 100000000000000000 100000000000000000 100000000000000000 100000000000000020 100000000000000020 100000000000000020 100000000000000020 100000000000000020 100000000000000020 100000000000000020 100000000000000020 100000000000000020 100000000000000020 100000000000000020

e:
1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000 1000000000000000000

i:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19

In other words, as the numbers approach 10^16, SPL can no longer maintain the sequence. Certain numbers (10,000,000,000,000,000, 100,000,000,000,000,000, 1,000,000,000,000,000,000, etc.) are still being presented, but not many numbers in between. Is there documentation about this? Is this configurable in limits.conf? Not having errors makes this a dangerous condition, so I definitely consider this a bug. In modern computing, 10^16 is not terribly large; even the shell can handle it.

$ for i in `seq 20`; do expr 10000000000000000 + $i; done
10000000000000001
10000000000000002
10000000000000003
10000000000000004
10000000000000005
10000000000000006
10000000000000007
10000000000000008
10000000000000009
10000000000000010
10000000000000011
10000000000000012
10000000000000013
10000000000000014
10000000000000015
10000000000000016
10000000000000017
10000000000000018
10000000000000019
10000000000000020

(You can try it in any shell.)
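What the columns show is standard IEEE 754 behavior rather than a limits.conf setting: SPL's eval arithmetic uses 64-bit doubles, whose 53-bit significand represents every integer exactly only up to 2^53 = 9,007,199,254,740,992 (about 9 x 10^15). A quick Python demonstration of the same cliff:

```python
# Doubles have a 53-bit significand: integers are exact up to 2**53,
# and above that, consecutive integers start mapping to the same double.
limit = 2**53  # 9007199254740992

exact_below = float(limit - 1) != float(limit)     # still distinguishable
collides_above = float(limit) == float(limit + 1)  # first collision

print(exact_below, collides_above)  # True True
```

Just past 10^16 the representable doubles are 2 apart, which is exactly the step pattern visible in columns b and c above; shell `expr` does not show this because it uses 64-bit integer arithmetic.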
Hi, I want to find systems where the DLP agent is not installed, using: index = mcafee_epo sourcetype = mcafee:epo:syslog source = mcafee:epo:syslog. How do I search for DLP agents not installed on a system?
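Without knowing the exact ePO field names, here is a hedged sketch: summarize which products each host reports, then keep the hosts whose product list never mentions DLP (product and dest_nt_host are assumptions about this sourcetype's fields):

```
index=mcafee_epo sourcetype=mcafee:epo:syslog
| stats values(product) as installed by dest_nt_host
| where isnull(mvfind(installed, "(?i)dlp"))
```

Note this only covers hosts that report to ePO at all; hosts sending nothing to the index would need an inputlookup of the full asset list compared against the search results.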
I have a log like this. How do I parse it into fields? Is there a way to use Splunk to parse this and extract one value? If so, how? Thank you in advance. Regards, Imam
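Since the sample log did not come through, only a generic sketch is possible. For key=value style events, extract can split everything into fields automatically:

```
... | extract pairdelim=" " kvdelim="="
```

A single value can instead be pulled with a targeted regex, e.g. | rex field=_raw "status=(?<status>\d+)" (status is a placeholder name). Posting a sample event would allow an exact extraction.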