All Posts

"only data with cl1 is getting replaced. I also have data with cl3 which needs to be replaced by ACD85."

There is no way @ITWhisperer's search should give this kind of partial replacement. But first, your search is very inefficient: the filter in the third line starting with search should be moved into the first line so fewer events are computed. Secondly, using regex on rigidly formatted data (CSV) is wasteful and prone to errors. This is what I suggest, using exactly what @ITWhisperer proposed:

index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv" ("\cl3\" OR "\cl1\")
| eval filename = split(_raw, ",")
| eval filesize = mvindex(filename, 1), filelocation = mvindex(filename, 2)
| eval filename = mvindex(filename, 0)
| eval filelocation=if(like(filelocation,"%\cl1%"),"ACD55","ACD85")

Also important: play with the following emulation and compare with your real data:

| makeresults
| fields - _*
| eval data=split("012624.1230,13253.10546875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1230 012624.1230,2236.3291015625,E:\totalview\ftp\acd\cl3\backup\012624.1230 012624.1200,13338.828125,E:\totalview\ftp\acd\cl1\backup_modified\012624.1200 012624.1200,2172.1640625,E:\totalview\ftp\acd\cl3\backup\012624.1200 012624.1130,13292.32421875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1130 012624.1130,2231.9658203125,E:\totalview\ftp\acd\cl3\backup\012624.1130 012624.1100,13438.65234375,E:\totalview\ftp\acd\cl1\backup_modified\012624.1100", " ")
| mvexpand data
| rename data AS _raw
| search (\\cl1\\ OR \\cl3\\)
``` the above emulates
index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv" ("\cl3\" OR "\cl1\") ```
| eval filename = split(_raw, ",")
| eval filesize = mvindex(filename, 1), filelocation = mvindex(filename, 2)
| eval filename = mvindex(filename, 0)
| eval filelocation=if(like(filelocation,"%\cl1%"),"ACD55","ACD85")

The output is:

_raw | filelocation | filename | filesize
012624.1230,13253.10546875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1230 | ACD55 | 012624.1230 | 13253.10546875
012624.1230,2236.3291015625,E:\totalview\ftp\acd\cl3\backup\012624.1230 | ACD85 | 012624.1230 | 2236.3291015625
012624.1200,13338.828125,E:\totalview\ftp\acd\cl1\backup_modified\012624.1200 | ACD55 | 012624.1200 | 13338.828125
012624.1200,2172.1640625,E:\totalview\ftp\acd\cl3\backup\012624.1200 | ACD85 | 012624.1200 | 2172.1640625
012624.1130,13292.32421875,E:\totalview\ftp\acd\cl1\backup_modified\012624.1130 | ACD55 | 012624.1130 | 13292.32421875
012624.1130,2231.9658203125,E:\totalview\ftp\acd\cl3\backup\012624.1130 | ACD85 | 012624.1130 | 2231.9658203125
012624.1100,13438.65234375,E:\totalview\ftp\acd\cl1\backup_modified\012624.1100 | ACD55 | 012624.1100 | 13438.65234375

As you see, there is no such "partial replacement". You will need to illustrate and explain any discrepancy between your real data and this mock data if you don't get the same results.
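If more cluster directories than cl1 and cl3 ever show up, a case() mapping is a bit more explicit than the if/like fallback. This is only a sketch of that variation, reusing the filelocation field and the ACD55/ACD85 values from this thread; any additional branches would be assumptions:

| eval filelocation=case(like(filelocation,"%\cl1%"), "ACD55",
    like(filelocation,"%\cl3%"), "ACD85",
    true(), filelocation) ``` anything else is left unchanged ```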
I'm reaching out in this community thread because we are stuck deploying the premium app IT Service Intelligence on Splunk Enterprise on-prem. Below are the troubles we ran into despite following the installation steps:

• I stopped the splunk service
• I extracted the ITSI .spl package according to the documentation
• I started the services, but the splunkd component wasn't able to activate the appserver, and therefore the web server

Digging into web_service.log and mainly into splunkd.log I've found these entries:

01-26-2024 17:26:50.164 +0000 ERROR UiPythonFallback [115369 WebuiStartup] - Couldn't start any appserver processes, UI will probably not function correctly!
01-26-2024 17:26:50.164 +0000 ERROR UiHttpListener [115369 WebuiStartup] - No app server is running, stop initializing http server.

So I proceeded to stop the services, uninstall the app component folders and their index storage repositories (according to the docs); then I started the services again and all components, including the web service, worked fine.

We've deployed Splunk Enterprise on an Ubuntu server (the package is splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb) and downloaded the ITSI app from its Splunkbase link https://splunkbase.splunk.com/app/1841

Could you give us some hints about this? We'd like to verify some ITSI features as soon as possible.

Thanks in advance and regards

Luigi
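While troubleshooting, a minimal sketch (assuming the default _internal index is searchable on this instance) for pulling the two startup errors quoted above out of splunkd.log; the component names are taken from the log lines in the post:

index=_internal sourcetype=splunkd log_level=ERROR (component=UiPythonFallback OR component=UiHttpListener)
| table _time host component _raw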
Hi Team, our Splunk is hosted in the Cloud. My requirement is to get an alert when an index is created and, similarly, to get an alert when an index is deleted from the search head. So kindly help with the query.
Ignore the deleted answer. When the space between "Conf" and "-console" was missing, I couldn't see the problem you were trying to fix. (It's really important to illustrate data accurately when asking data analytics questions.)

So, the regex doesn't handle inputs like "-config Conf -console -ntpsync no --check_core no". In fact, the regex is already too heavy-handed. Instead of adding more regex tax, the following method semantically expresses command-line syntax and does not require composition and decomposition. (A command line allows multiple spaces and such, but that is nothing another trim couldn't fix; see the sketch after the output table below.)

| eval flag = mvindex(split(Aptlauncher_cmd, " -"), 1, -1)
| eval flag = trim(flag, "-")
| mvexpand flag
| eval flag = split(flag, " ")
| eval value = mvindex(flag, 1), flag = mvindex(flag, 0)
| eval value = if(isnull(flag), null(), coalesce(value, "true"))

The samples (including "-config Conf -console -ntpsync no --check_core no") will give:

Aptlauncher_cmd | flag | value
launch test -config Conf -console -ntpsync no --check_core no | config | Conf
launch test -config Conf -console -ntpsync no --check_core no | console | true
launch test -config Conf -console -ntpsync no --check_core no | ntpsync | no
launch test -config Conf -console -ntpsync no --check_core no | check_core | no
launch test -config basic_config.cfg -system test_system1 -retry 3 | config | basic_config.cfg
launch test -config basic_config.cfg -system test_system1 -retry 3 | system | test_system1
launch test -config basic_config.cfg -system test_system1 -retry 3 | retry | 3
launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4 | con-fig | advanced-config_v2.cfg
launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4 | sys_tem | test_system_2
launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4 | re-try | 4
launch update -email user@example.com -domain test.domain.com -port 8080 | email | user@example.com
launch update -email user@example.com -domain test.domain.com -port 8080 | domain | test.domain.com
launch update -email user@example.com -domain test.domain.com -port 8080 | port | 8080
launch deploy -verbose -dry_run -force | verbose | true
launch deploy -verbose -dry_run -force | dry_run | true
launch deploy -verbose -dry_run -force | force | true
launch schedule -task "Deploy task" -at "2023-07-21 10:00:00" -notify "admin@example.com" | task | "Deploy
launch schedule -task "Deploy task" -at "2023-07-21 10:00:00" -notify "admin@example.com" | at | "2023-07-21
launch schedule -task "Deploy task" -at "2023-07-21 10:00:00" -notify "admin@example.com" | notify | "admin@example.com"
launch clean -@cleanup -remove_all -v2.5 | @cleanup | true
launch clean -@cleanup -remove_all -v2.5 | remove_all | true
launch clean -@cleanup -remove_all -v2.5 | v2.5 | true
launch start -config@version2 --custom-env DEV-TEST --update-rate@5min | config@version2 | true
launch start -config@version2 --custom-env DEV-TEST --update-rate@5min | custom-env | DEV-TEST
launch start -config@version2 --custom-env DEV-TEST --update-rate@5min | update-rate@5min | true
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent | env | DEV
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent | build-version | 1.0.0
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent | @retry-limit | 5
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent | log-level | debug
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent | silent | true
launch execute -file script.sh -next-gen --flag -another-flag value | file | script.sh
launch execute -file script.sh -next-gen --flag -another-flag value | next-gen | true
launch execute -file script.sh -next-gen --flag -another-flag value | flag | true
launch execute -file script.sh -next-gen --flag -another-flag value | another-flag | value
launch execute process_without_any_flags |  | 
launch special -@@ -##value special_value --$$$ 100 | @@ | true
launch special -@@ -##value special_value --$$$ 100 | ##value | special_value
launch special -@@ -##value special_value --$$$ 100 | $$$ | 100
launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2 | add | 5
launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2 | subtract | 3
launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2 | multiply@2.5 | true
launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2 | divide@2 | true
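Regarding the parenthetical above about multiple spaces: a minimal sketch, not part of the original answer, of one way to collapse repeated whitespace before splitting; it assumes the same Aptlauncher_cmd field:

| eval Aptlauncher_cmd = replace(Aptlauncher_cmd, " +", " ") ``` collapse runs of spaces to a single space ```
| eval flag = mvindex(split(Aptlauncher_cmd, " -"), 1, -1)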
Hi, I have a connection in Splunk DB Connect on my HF (connected to my SH, and I know the connection is stable and other sources reach my SH from the HF), but data is not populated in my index (I also tried connecting to a new index=database on my SH and HF and restarting, and it did not work).
Hi all,
We get this error:

Analytics service unavailable: Host "10.10.240.102" returned code 401 with message 'Status code: [401], Message: HTTP 401 Unauthorized'. Please contact support if this error persists.

even though I have made sure that analytics.accountAccessKey, ad.accountmanager.key.eum, and appdynamics.es.eum.key from the admin console are all the same.
Yes, thanks Ryan. Now we have another problem with Elasticsearch: it crashes on memory usage.

head hs_err_pid512833.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full

So how many GB of memory should we have to run the ES? [Internal AppD Support ticket #386641]
You should use the timestamp after the host name, like this:

TIME_PREFIX = [^,]+,
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y/%m/%d %H:%M:%S

This will pick the part after the first comma, i.e. 2014/11/25 15:00:36, from:

Nov 25 15:00:37 192.168.96.13 Nov 25 15:00:37 PA-Primary.abc.abc.com 1,2014/11/25 15:00:36,0011C100738,TRAFFIC.....
Hello VatsalJagani, thanks to you it worked out well.
Hi @anissabnk,
let me understand: is the issue that the columns aren't in the correct order?
Use date_year and date_month in the stats command instead of only date_month, or add this command before using stats:

| eval month=date_year."-".date_month

and your stats columns will be ordered.
Ciao.
Giuseppe
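A minimal sketch of the second option, just to show where the eval sits relative to stats; the base search and the count split are placeholders, not taken from the original question:

index=your_index
| eval month=date_year."-".date_month
| stats count by month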
Hi Roberto, can you please share how you have added the ntlm libraries to the Private Agent?
I am facing the same errors with the Log Extension:

Error MonitorExecutorServiceModule-Log Monitor - Queue Capacity reached!! Rejecting runnable tasks.
Error StreamReaderRunnable - Error reading stream
Error: Error response from Controller: Connection back off in effect.

Any advice on finding the root cause? After a restart, queue data comes through until the queue fills up again. Max Heap Memory: 1 GB, Max Metric Limit: 7000. Number of threads: 80. Any advice on resolving these errors?
Try running btool when this "non-impacting" error occurs.  It doesn't run.
It is a breaking issue as I cannot run btool on my forwarders that are throwing this message. 
App TA_MongoDB_Atlas (6238) pages are not loading after migration to 9.1.2.
I've experienced the same situation. The Batch Reader was reading another large zip file before the one waiting with the message "updated less than 10000ms ago, will not read it until it stops changing; has stopped changing, will read it now." Just wait until the Batch Reader finishes reading the other large files; then, the one waiting should be read.
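If you want to watch that happen rather than just wait, here is a minimal sketch. It assumes the default _internal index is available and that these messages come from the BatchReader component of splunkd.log; the component name is an assumption, not taken from the post:

index=_internal sourcetype=splunkd component=BatchReader
| table _time host _raw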
Ah, I believe you'll want

REGEX = comm="elasticsearch\[n"

Also just gonna say - I've done this before, I promise it can be made to work! Don't lose hope even if it's a bit fiddly to get it set up sometimes!
I restarted, but it doesn't pick up the changes. Is the data `comm="elasticsearch[n"` correct, with the special character `[`?
It was my mistake. I copied | rename Source_Network_Address_Port as snat from your original search for index _ad. snat requires coalesce as well.

(index=_ad (EventCode=4625 OR (EventCode=4771 Failure_Code=0x18)) Account_Name=JohnDoe Source_Network_Address IN (10.10.10.10 20.20.20.20)) OR (index=_network snat IN (10.10.10.10*,20.20.20.20*)) ``` get relevant data ```
| bucket span=1m _time ``` common time buckets ```
| eval Source_Network_Address1 = case(EventCode==4771, trim(Client_Address, "::ffff:"))
| eval SourceIP = Source_Network_Address
| eval Account_Name4625= case(EventCode==4625, mvindex(Account_Name,1))
| eval Account_Name4771= case(EventCode==4771, Account_Name)
| eval Account_Name = coalesce(Account_Name4771, Account_Name4625)
| eval Source_Network_Address_Port = SourceIP+":"+Source_Port
| eval snat = coalesce(snat, Source_Network_Address_Port)
| rex field=ComputerName "(?<DCName>^([^.]+))" ``` the above applies to index _ad ```
| rex field=client "^(?<client_ip>.*?)\:(?<client_port>.*)" ``` this applies to index _network ```
| eventstats values(client_ip) as client_ip by _time snat ``` assuming index _ad search returns fewer events ```
| stats count by _time snat Account_Name EventCode DCName client_ip
Hi Everyone,
We've created a new TA to get data in from an API - this was done on the HF and the data is being sent to our Cloud instance; however, the field values are getting duplicated. I tried changing the INDEXED_EXTRACTIONS and KV_MODE settings on the HF as explained by many others, without success. In the Cloud there wasn't a sourcetype for this data feed, so we created one manually and set INDEXED_EXTRACTIONS = none and KV_MODE = json; however, this made no change. I've also added a stanza in local.meta on the HF, as suggested by others, as follows: export = system.

Here's the sourcetype stanza on the HF. As you can see, INDEXED_EXTRACTIONS and KV_MODE are both set to none, but I've tried pretty much every combination possible - which suggests to me the issue is in the Cloud.

ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = false
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE =
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
HEADER_MODE =
INDEXED_EXTRACTIONS = none
KV_MODE = none
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER = ([\r\n]+)
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
NO_BINARY_CHECK = true
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = 0
TIME_FORMAT =
TRANSFORMS =
TRUNCATE = 10000
category = Structured
detect_trailing_nulls = false
disabled = false
maxDist = 100
priority =
pulldown_type = 1
sourcetype =
termFrequencyWeightedDist = false

Any help would be greatly appreciated.
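For comparison, the combination that is commonly suggested for JSON feeds to avoid doubled field values is to extract the fields exactly once - either at index time on the HF or at search time on the Cloud stack, not both. The stanzas below are only a sketch of that idea with a hypothetical sourcetype name (my_api:json); they are not a confirmed fix for this particular feed:

# props.conf on the HF (parsing tier) - hypothetical sourcetype name, index-time JSON extraction only
[my_api:json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false

# props.conf on the Cloud search head (search tier) - keep search-time JSON extraction off
[my_api:json]
KV_MODE = none
AUTO_KV_JSON = false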