All Posts

Hi all, we get this error:

Analytics service unavailable: Host "10.10.240.102" returned code 401 with message 'Status code: [401], Message: HTTP 401 Unauthorized'. Please contact support if this error persists.

This happens even though I have made sure that analytics.accountAccessKey is the same as ad.accountmanager.key, and that the EUM key is the same as appdynamics.es.eum.key from the admin console.
Yes, thanks Ryan. Now we have another problem: Elasticsearch crashes on memory usage.

head hs_err_pid512833.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full

So how many GB of memory should we give ES? [Internal AppD Support ticket #386641]
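For sizing, the general Elasticsearch guidance (not specific to this thread) is to set -Xms and -Xmx to the same value, at most about half of the machine's physical RAM and below roughly 31 GB, so the OS keeps enough native memory for mmap allocations like the one failing in the log above. A hedged sketch, assuming a 16 GB host and the standard jvm.options.d override mechanism:

```
# config/jvm.options.d/heap.options  (hypothetical 16 GB host)
# Equal min/max heap avoids resize pauses; the other ~8 GB stays
# available to the OS page cache and native allocations.
-Xms8g
-Xmx8g
```

Also check that swap is not full (or disabled per Elastic's recommendation), since the crash log lists a full swap backing store as a possible cause.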
You should use the timestamp after the host name, like this:

TIME_PREFIX = [^,]+,
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y/%m/%d %H:%M:%S

This will pick out the "2014/11/25 15:00:36" part (shown in bold in the original post) from:

Nov 25 15:00:37 192.168.96.13 Nov 25 15:00:37 PA-Primary.abc.abc.com 1,2014/11/25 15:00:36,0011C100738,TRAFFIC.....
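To illustrate how those three settings interact, here is a small Python sketch. This is not Splunk itself, just an approximation of the same skip-then-parse logic, using a sample event in the same shape as the one above:

```python
import re
from datetime import datetime

# Sample event in the same shape as the one in the thread
raw = "Nov 25 15:00:37 PA-Primary.abc.abc.com 1,2014/11/25 15:00:36,0011C100738,TRAFFIC"

# TIME_PREFIX = [^,]+,  -> skip everything up to and including the first
# comma before looking for the timestamp
m = re.match(r"[^,]+,", raw)

# MAX_TIMESTAMP_LOOKAHEAD = 30 -> only scan the next 30 characters
lookahead = raw[m.end():m.end() + 30]

# TIME_FORMAT = %Y/%m/%d %H:%M:%S is applied at that position
# (the format consumes the first 19 characters of the lookahead window)
ts = datetime.strptime(lookahead[:19], "%Y/%m/%d %H:%M:%S")
print(ts)  # 2014-11-25 15:00:36
```

The key point is that TIME_PREFIX anchors past the syslog header and device hostname, so the second timestamp (the one inside the comma-separated payload) is the one that gets parsed.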
Hello VatsalJagani, thanks to you it worked out well.
Hi @anissabnk, let me understand: is the issue that the columns aren't in the correct order? Use date_year and date_month in the stats command, instead of only date_month, or add this command before using stats:

| eval month=date_year."-".date_month

and your stats columns will be ordered. Ciao. Giuseppe
Hi Roberto, can you please share how you added the NTLM libraries to the Private Agent?
I am facing the same errors with the Log Extension:

Error MonitorExecutorServiceModule-Log Monitor - Queue Capacity reached!! Rejecting runnable tasks.
Error StreamReaderRunnable - Error reading stream
Error: Error response from Controller: Connection back off in effect.

Any advice on finding the root cause? After a restart, queue data comes through until the queue fills up again. Max Heap Memory: 1 GB, Max Metric Limit: 7000, Number of threads: 80. Any advice on resolving these errors?
Try running btool when this "non-impacting" error occurs.  It doesn't run.
It is a breaking issue as I cannot run btool on my forwarders that are throwing this message. 
App TA_MongoDB_Atlas (6238) pages are not loading after migration to 9.1.2.
I've experienced the same situation. Batch Reader was reading another large zip file before the one waiting with the message, "updated less than 10000ms ago, will not read it until it stops changing; has stopped changing, will read it now." Just wait until Batch Reader finishes reading the other large files; then the one waiting should be read.
Ah, I believe you'll want:

REGEX = comm="elasticsearch\[n"

Also, just gonna say: I've done this before, and I promise it can be made to work! Don't lose hope even if it's a bit fiddly to get it set up sometimes!
I restarted, but it doesn't pick up the changes. Is the data `comm="elasticsearch[n"` correct with the special character `[`?
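For what it's worth, the bracket escaping can be sanity-checked outside Splunk with a short Python sketch. The audit-log line below is made up, and Python's re module behaves like Splunk's PCRE here: an unescaped `[` opens a character class, so the literal bracket must be written `\[`:

```python
import re

# Hypothetical audit-log line containing the comm field from the thread
line = 'type=SYSCALL syscall=2 comm="elasticsearch[nioEventLoop]" exe="/usr/bin/java"'

# Escaped bracket: matches the literal [ in the data
good = re.compile(r'comm="elasticsearch\[n')
print(bool(good.search(line)))  # True

# Unescaped bracket: [n starts an unterminated character class,
# which is a regex syntax error rather than a literal match
try:
    re.compile('comm="elasticsearch[n')
except re.error as e:
    print("unescaped [ fails to compile:", e)
```

So the escaped form in the transforms.conf REGEX is correct; if the change still isn't taking effect after a restart, the problem is likely elsewhere (e.g. which instance the props/transforms live on), not the regex.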
It was my mistake.  I copied | rename Source_Network_Address_Port as snat from your original search for index _ad.  snat requires coalesce as well.

(index=_ad (EventCode=4625 OR (EventCode=4771 Failure_Code=0x18)) Account_Name=JohnDoe Source_Network_Address IN (10.10.10.10 20.20.20.20)) OR (index=_network snat IN (10.10.10.10*,20.20.20.20*)) ``` get relevant data ```
| bucket span=1m _time ``` common time buckets ```
| eval Source_Network_Address1 = case(EventCode==4771, trim(Client_Address, "::ffff:"))
| eval SourceIP = Source_Network_Address
| eval Account_Name4625 = case(EventCode=4625, mvindex(Account_Name,1))
| eval Account_Name4771 = case(EventCode=4771, Account_Name)
| eval Account_Name = coalesce(Account_Name4771, Account_Name4625)
| eval Source_Network_Address_Port = SourceIP+":"+Source_Port
| eval snat = coalesce(snat, Source_Network_Address_Port)
| rex field=ComputerName "(?<DCName>^([^.]+))" ``` the above applies to index _ad ```
| rex field=client "^(?<client_ip>.*?)\:(?<client_port>.*)" ``` this applies to index _network ```
| eventstats values(client_ip) as client_ip by _time snat ``` assuming index _ad search returns fewer events ```
| stats count by _time snat Account_Name EventCode DCName client_ip
Hi Everyone, We've created a new TA to get data in from an API. This was done on the HF, and the data is being sent to our Cloud instance; however, the field values are getting duplicated. I tried changing the INDEXED_EXTRACTIONS and KV_MODE settings on the HF as explained by many others, without success. In Cloud there wasn't a source type for this data feed, so we created one manually and set INDEXED_EXTRACTIONS = none and KV_MODE = json, but this made no change. I've also added a stanza in local.meta on the HF as suggested by others, as follows: export = system.

Here's a snap of the sourcetype stanza on the HF. As you can see, INDEXED_EXTRACTIONS and KV_MODE are both set to none, but I've tried pretty much every combination possible, which suggests to me the issue is in the Cloud.

ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = false
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE =
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
HEADER_MODE =
INDEXED_EXTRACTIONS = none
KV_MODE = none
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER = ([\r\n]+)
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
NO_BINARY_CHECK = true
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = 0
TIME_FORMAT =
TRANSFORMS =
TRUNCATE = 10000
category = Structured
detect_trailing_nulls = false
disabled = false
maxDist = 100
priority =
pulldown_type = 1
sourcetype =
termFrequencyWeightedDist = false

Any help would be greatly appreciated.
I'm not a big fan of (mis)using actual data as a JSON key.  If you have any influence over your developers, ask them to design a proper data structure. To rescue data in its current state, Splunk will need to construct a proper structure.  For this purpose, json_set, which Splunk added in 8.0, comes in handy.  Like this: (Because you didn't enclose sample data, I just assume that lint-info is a top-level key in your raw data.)

| spath path=lint-info.-Wunused-but-set-variable{}
| spath path=lint-info.-Wunused-genvar{}
| mvexpand lint-info.-Wunused-but-set-variable{}
| mvexpand lint-info.-Wunused-genvar{}
| rename lint-info.*{} AS *
| foreach -Wunused-* [eval -Wunused = mvappend('-Wunused', json_set('<<FIELD>>', "type", "<<FIELD>>"))]
| mvexpand -Wunused
| spath input=-Wunused
| fields - -Wunused-* _*

This is the result from your sample (fields: -Wunused | location.column | location.filename | location.line | source | type | warning):

{"location":{"column":58,"filename":"ab1","line":237},"source":"logic [MSGG_RX_CNT-1:0][MSGG_RX_CNT_MAXWIDTH+2:0] msgg_max_unrsrvd_temp; // temp value including carry out","warning":"variable 'msgg_max_unrsrvd_temp' is assigned but its value is never used","type":"-Wunused-but-set-variable"} | 58 | ab1 | 237 | logic [MSGG_RX_CNT-1:0][MSGG_RX_CNT_MAXWIDTH+2:0] msgg_max_unrsrvd_temp; // temp value including carry out | -Wunused-but-set-variable | variable 'msgg_max_unrsrvd_temp' is assigned but its value is never used

{"location":{"column":11,"filename":"ab3","line":328},"source":"genvar nn,oo;","warning":"unused genvar 'oo'","type":"-Wunused-genvar"} | 11 | ab3 | 328 | genvar nn,oo; | -Wunused-genvar | unused genvar 'oo'

{"location":{"column":58,"filename":"ab2","line":254},"source":"logic msgg_avail_cnt_err; // Available Counter update error detected","warning":"variable 'msgg_avail_cnt_err' is assigned but its value is never used","type":"-Wunused-but-set-variable"} | 58 | ab2 | 254 | logic msgg_avail_cnt_err; // Available Counter update error detected | -Wunused-but-set-variable | variable 'msgg_avail_cnt_err' is assigned but its value is never used

{"location":{"column":11,"filename":"ab3","line":328},"source":"genvar nn,oo;","warning":"unused genvar 'oo'","type":"-Wunused-genvar"} | 11 | ab3 | 328 | genvar nn,oo; | -Wunused-genvar | unused genvar 'oo'

Here is data emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "{\"lint-info\": { \"-Wunused-but-set-variable\": [ { \"location\": { \"column\": 58, \"filename\": \"ab1\", \"line\": 237 }, \"source\": \"logic [MSGG_RX_CNT-1:0][MSGG_RX_CNT_MAXWIDTH+2:0] msgg_max_unrsrvd_temp; // temp value including carry out\", \"warning\": \"variable 'msgg_max_unrsrvd_temp' is assigned but its value is never used\" }, { \"location\": { \"column\": 58, \"filename\": \"ab2\", \"line\": 254 }, \"source\": \"logic msgg_avail_cnt_err; // Available Counter update error detected\", \"warning\": \"variable 'msgg_avail_cnt_err' is assigned but its value is never used\" } ], \"-Wunused-genvar\": [ { \"location\": { \"column\": 11, \"filename\": \"ab3\", \"line\": 328 }, \"source\": \"genvar nn,oo;\", \"warning\": \"unused genvar 'oo'\" } ], \"total\": 3, \"types\": [ \"-Wunused-but-set-variable\", \"-Wunused-genvar\" ] }}" ``` data emulation above ```

Hope this helps.
Experiencing the same issue. Can you advise where you defined this setting, please? We're using Splunk Cloud, so I'm not sure how to access the local.meta file.
I have JSON data where sometimes the status: closed field is missing from some of the events, and I want to write a regex to bring this into the event:

state: {
    alert_id: orca-8452634
    closed_reason: null
    closed_time: null
    created_at: 2023-07-06T11:41:18+00:00
    high_since: null
    in_verification: null
    is_new_score: null
    last_seen: 2024-02-04T11:38:11+00:00
    last_updated: 2024-02-05T13:45:45+00:00
    low_since: 2024-02-05T13:45:45+00:00
    orca_score: 7
    risk_level: high
    rule_source: null
    score: 2
    severity: imminent compromise
    status: closed
    status_time: 2024-02-05T13:45:45+00:00
    verification_status: null

Can anyone help us to close this?
So I have my application set up on my controller and I'm able to see every workload I'm sending to that Controller application. I have quite a lot of workloads that I'm running and every time I run them I have to get on that application and look at the Application flow to validate that my workloads are showing up as expected and that call counts for each node are accurate (regardless of error or success). I was looking for a method to automate the task of having to open the Application every time to validate the flow map and call counts, it could be an API that would get the Application flow map metrics. Is there such an API that will help me with this? I was looking at the AppDynamics documentation and the discussions but didn't get anything related to this.   In short, I want to get the "application flow map" data (that can be seen on the Application dashboard) from AppDynamics APIs. Thanks in Advance, Surafel
Sourcetypes should match between inputs.conf and props.conf, except when props are specified for hosts or sources rather than sourcetypes.  The indexes.conf file does not use sourcetypes.

The regex needs an escape character before the /.  Also, the expression has two capture groups, so make sure $1 is the group you expect, or use a single group:

Hostname>(?<hostname>.*?)<\/Hostname