All Topics

Trying to use the OpenTelemetry Collector (opentelemetry-collector-contrib) to collect and push metrics into AppDynamics, and I get a 403 Forbidden in the debug log when calling the URL https://pdx-sls-agent-api.saas.appdynamics.com/v1/metrics. Checked the following so far:

service.namespace = the name of the application
service.name = the name of the Tier I've created in AppDynamics

In the config.yml I have the following set up:

exporters:
  otlphttp:
    endpoint: "https://pdx-sls-agent-api.saas.appdynamics.com"
    headers: {"x-api-key": "<key_copied_from_the_otel_page_in_appdynamics>"}
  logging:
    loglevel: debug

Any tips on where to go next? Is there any documentation on which endpoints exist, and does the type of the Tier affect anything?

br Kjell
Hi All, I am trying to add a severity column to the output of my search — could you please let me know how to do it? The query I have created is:

index=abc source=xyz
| table _time ID STATUS ERROR_Name
| search ERROR_Name IN ("EndDate must be after StartDate", "The following is required: PersonName", ....many others)
| join type=inner ID
    [search index=abc source=xyz STATUS IN (FATAL, SUCCESS)
    | table _time ID STATUS
    | stats latest(STATUS) as STATUS by ID
    | search STATUS IN (FATAL)
    | fields ID]
| stats latest(STATUS) as STATUS by ID ERROR_Name
| search STATUS IN (FATAL)
| top 50 ERROR_Name
| appendcols
    [| eval severity = case(ERROR_Name=="EndDate must be after StartDate", "One", ERROR_Name=="The following is required: PersonName", "two")]
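A possible direction (an untested sketch reusing the field names from the question): appendcols pastes result rows together positionally, so the severity rows will not line up with the output of top. Computing severity per event with eval case() before aggregating avoids that:

```
index=abc source=xyz
| search ERROR_Name IN ("EndDate must be after StartDate", "The following is required: PersonName")
| eval severity = case(
    ERROR_Name=="EndDate must be after StartDate", "One",
    ERROR_Name=="The following is required: PersonName", "two",
    true(), "unknown")
| top 50 ERROR_Name severity
```

With many mappings, a lookup file (ERROR_Name -> severity) plus the lookup command would keep the query short instead of a long case().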
Hello, I'm new here; I tried to find the answer to my problem but failed. I'm looking for a method to extract values from 2 different events. These events have some common fields, but I'm not interested in those being part of the output. My events have the following fields (there are more, but these are the ones I would like to operate on):

EventID=10001 time=_time user=mike vlan=mikevlan
EventID=10002 time=_time user=mike L2ipaddress=1.2.3.4

What I'm looking for as a result is a table with the combined results from the vlan and L2ipaddress columns for which user and time match; then I need a list of all vlans grouped by L2ipaddress:

1.2.3.4|mikevlan,tomvlan,anavlan
1.2.3.5|brianvlan,evevlan

etc. Any ideas?
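One common pattern for this kind of correlation (a sketch; the index name is a placeholder, and it assumes user and time are the shared keys): join the two event types with stats over the shared fields, then regroup by address:

```
index=<your_index> (EventID=10001 OR EventID=10002)
| stats values(vlan) as vlan, values(L2ipaddress) as L2ipaddress by user, time
| stats values(vlan) as vlans by L2ipaddress
| eval vlans = mvjoin(vlans, ",")
```

The first stats merges each user/time pair into one row carrying both vlan and L2ipaddress; the second groups the vlans per address, and mvjoin renders them comma-separated.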
My dashboard panel won't work; even after changing input values, it always says 'Waiting for input'. I am unable to figure out whether I am passing the tokens incorrectly or there is some other issue. Could use some help.
Hi Splunkers. I'm trying to extract fields from Windows DNS debug logs but am running into extraction issues for some events. For most events the fields extract OK. I'm finding that for some events the regex returns more than it should in the field, i.e. it returns the field plus the remaining text in the raw event. It works for most events, extracting the domain correctly as, for example, (3)web(4)site(5)again(3)net(0), but when it fails it extracts the questionname field as (3)web(4)site(5)again(3)net(0) plus the remaining text to the end of the event. The regex in use is straight out of the Splunk TA for Windows, from props.conf:

] (?<questiontype>\w+)\s+(?<questionname>.*)

Sample data:
-------
28/10/2022 12:29:22 PM 07AC PACKET 1234523DDF690A11 UDP Snd 10.20.222.111 54c5 R Q [8081 DR NOERROR] A (3)web(4)site(5)again(3)net(0)
UDP response info at 1234523DDF690A11
  Socket = 736
  Remote addr 10.20.222.111, port 62754
  Time Query=20130697, Queued=0, Expire=0
  Buf length = 0x0200 (512)
  Msg length = 0x0054 (84)
  Message:
    XID 0x54c5
    Flags 0x8180
      QR 1 (RESPONSE)
      OPCODE 0 (QUERY)
      AA 0
      TC 0
      RD 1
      RA 1
      Z 0
      CD 0
      AD 0
      RCODE 0 (NOERROR)
    QCOUNT 1
    ACOUNT 2
    NSCOUNT 0
    ARCOUNT 0
    QUESTION SECTION:
    [snipped for brevity]
--------

If I use the regex from the props.conf above in a rex command via SPL, the field is extracted correctly. The same regex also works fine in regex101 etc. (with the same event that causes the issue used as test data). Can anyone explain why the regex works differently when used in props.conf than in direct SPL, and where I should be looking? As mentioned above, the issue only occurs for some events. Note that DNS events are both single-line and multi-line, with only some multi-line events having the issue. Thanks in advance.
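Since only some multi-line events misbehave, one thing worth testing (a sketch; the sourcetype stanza name is an assumption, not from the TA) is bounding the capture so the extraction cannot run across line breaks in _raw, regardless of how the event was broken:

```
[your:dns:sourcetype]
EXTRACT-dns_question = \] (?<questiontype>\w+)\s+(?<questionname>[^\r\n]+)
```

Replacing the greedy .* with [^\r\n]+ stops questionname at the end of the line even when the rest of a multi-line event follows in the same _raw.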
Hi all. I'm new to Splunk Cloud, so I installed the JIRA Cloud Add-on for Splunk Cloud by following these steps. But when I search based on the index that I configured in step 4, I get 'No results found'. And when I try to do this, it shows 'Unknown search command 'jira''. Did I miss something here? Please kindly help. Thank you so much.
Hello Team, I want to implement pool enforcement policies in Splunk. Please suggest how I can proceed; if any documents are available, please share them with me. Pools to implement:

1. high_perf pool
2. limited_perf pool
3. standard_perf pool
We re-routed data from Splunk SaaS Cloud to on-prem, but we see an event-count mismatch between the two instances. If I route the data to Splunk Cloud, the event count suddenly increases, but when I re-route the same data source to on-prem the event count drops drastically for the same time period, and I don't see any errors on the forwarder.

FW --> Splunk SaaS: higher count.
FW --> Splunk on-prem: lower count.

Please see the screenshot and let me know what the issue might be and how to troubleshoot this count mismatch. Thanks.
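A starting point for troubleshooting (a sketch; the index name is a placeholder): run the same hourly count per sourcetype on both instances over an identical time window and diff the results, to narrow down which sourcetypes and hours lose events:

```
| tstats count where index=<your_index> by sourcetype, _time span=1h
```

On the forwarder side, events in index=_internal source=*metrics.log* group=queue with blocked=true would point at queue congestion, which can silently delay or drop data without surfacing as an error in the forwarder's console.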
Hello! I am making a timechart of how many apples have been picked EACH day, but the data field representing the number of picked apples is a cumulative sum over the month. E.g., yesterday 5 apples were picked and today 3; instead of today's pick count being 3, it is represented as 8 (5+3). Given this, how can I make the timechart values subtract the number of apples previously picked from the current number, so I get the number of apples picked that day?

Code:
index=...
| bin span=1d _time
| dedup _time Apple_type
| stats sum(pick_count) as Picked by _time Apple_type
| timechart values(Picked) by Apple_type span=1d
| fillnull value=0

Please help! Thank you.
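One way to turn a cumulative count back into a per-day count (a sketch built on the field names in the question; the index is a placeholder and max() is assumed to pick each day's closing value) is streamstats with current=f to fetch the previous day's value per Apple_type, then subtract:

```
index=<your_index>
| bin span=1d _time
| stats max(pick_count) as cumulative by _time, Apple_type
| streamstats current=f window=1 last(cumulative) as previous by Apple_type
| eval daily = cumulative - coalesce(previous, 0)
| timechart span=1d values(daily) by Apple_type
```

coalesce(previous, 0) handles the first day of the month, where there is no prior row to subtract.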
Hey community, can someone help me out with a rex-related question? Many, many thanks! I am trying to rex the V1 out of a sample string, and I have tried:

catalogVersion\\":\\"(?P<catalogVersion>[^ ]+)\\",

In regex101 it works; however, I am getting an 'Unbalanced quotes' error in Splunk.

Sample string:
\"transferDisconnectReasons\":null,\"catalogVersion\":\"V1\",\"accountCustomerDetails\"

Cheers!
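For what it's worth: the pattern handed to rex is itself a double-quoted SPL string, so every literal double quote must also be escaped for SPL (\"), and matching a literal backslash in the event takes four backslashes. A sketch that anchors on the escaped quotes in the raw JSON and stops the capture at the next backslash:

```
| rex "catalogVersion\\\\\":\\\\\"(?<catalogVersion>[^\\\\]+)"
```

Against the sample string this should yield catalogVersion=V1, but it is worth verifying on real events — the escaping layers differ if the data is not actually stored with the backslashes shown.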
Hello y'all! I'm trying to use the Single Value object and build a search which counts the number of records and displays it, but for some reason it's not bringing back the right number. Here is my search:

index=redhatinsights
| spath
| spath path=events{} output=events
| stats by _time, events, application, event_type, account_id, context.display_name
| mvexpand events
| eval _raw=events
| kv
| table _time
| where relative_time(now(), "-30d") <= _time
| timechart span=30d count(_time) as count
| appendpipe
    [| stats count
    | where count=0
    | addinfo
    | eval time=info_min_time." ".info_max_time
    | makemv time
    | mvexpand time
    | table time count
    | rename time as _time]

For some reason it is not bringing back all the records, and the time range picker does not affect the result at all. What is the right way to use this object and get the total count of records for the last 30 days? Thanks!
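If the goal is just a Single Value showing the record count for the last 30 days, a much shorter search may behave better (a sketch based on the fields in the question): a bare stats count feeds the Single Value directly, with no timechart or appendpipe needed for a single number:

```
index=redhatinsights earliest=-30d
| spath path=events{} output=events
| mvexpand events
| stats count
```

Note that an explicit earliest=-30d in the search overrides the time range picker; dropping it lets the picker control the window instead.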
Hello, I have lots of records; some have the account_id field filled, others have the org_id field filled, and some have both. I'm trying to show both fields (account_id and org_id) in the table, but when I put org_id into the stats by-clause, only a few records come back; if I remove it, all the records come back. What am I doing wrong? Thanks!

Here is my search:

| spath
| rename object.* as *
| spath path=events{} output=events
| mvexpand events
| stats by timestamp, events, application, event_type, org_id, account_id, context.display_name
| eval _raw=events
| kv
| table created_at_fmt, account_id, "application", "event_type", "context.display_name", title, url, org_id
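One likely cause: stats silently drops any event in which one of the by-clause fields is null, so adding org_id removes every record that lacks it. A common workaround (a sketch to splice into the pipeline before the stats) is fillnull on the sparse fields:

```
| fillnull value="unknown" account_id org_id
| stats count by timestamp, application, event_type, org_id, account_id
```

After the stats, the "unknown" placeholder can be blanked again with eval if an empty cell is preferred in the final table.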
We recently upgraded Splunk Enterprise from 8.1.3 to 9.0.1. The UFs are still on 8.1.3. In the front-end health check, we are getting the error below for Forwarder Ingestion Latency on the SH and CM as well as the indexers.

Root Cause(s):
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1581. Message from <some_value>
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1301539. Message from <some_value>
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1301539. Message from <some_value>
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1311. Message from <some_value>

Unhealthy Instances:
- instance name 1
- instance name 2
and so on
Hi everyone, I have a suspicion that the following order of events caused an alert not to trigger when due:

1) I cloned the original alert for testing purposes.
2) The two alerts find the same results and run simultaneously.
3) I disabled the cloned alert.
4) The original alert no longer triggers (no email being sent, no events being logged in our alert index...) even when the search condition is fulfilled.

I repeated the search with the alert's logic and results come back. I have no other explanation than the above. Has anyone seen this happen before?

Thank you in advance
Hi, Splunk is adding additional double quotes when I export data as CSV. When I use the exported file as an Eventgen sample file, it causes parsing issues when inserting events from the sample file. Any suggestions for this issue?
Hi, not quite sure how to install this app on Splunk Cloud. Appreciate any help! https://splunkbase.splunk.com/app/2962#/overview
Hi, we have an add-on which uses JSON format for data input, and I can export the data in JSON format. Could you please let me know how to generate events using Eventgen with the exported JSON sample file?
I suspect that my outputs.conf configuration files are causing some unwanted data cloning in my forwarders. I am trying to make sense of some weird behavior I am observing; I am hoping someone can fact-check my assumptions for validity, or tell me if I am not understanding this issue correctly.

I have a UF on a syslog server. On the UF is a variety of apps, only a few of which possess an outputs.conf file. If I search for outputs.conf files, these are the 4 that I find:

./apps/SplunkUniversalForwarder/default/outputs.conf
./apps/comp_all_forwarder_outputs/local/outputs.conf
./apps/comp_all_outputs/local/outputs.conf
./system/default/outputs.conf

Based on the conf-file hierarchy rules, I would expect the two with ./local/outputs.conf to take priority over the other two with ./default/outputs.conf. Taking a look at each file, one specifies the indexer peers by FQDN, and the other specifies the peers as IP addresses. Since both files have the same priority, and they are not the same conf file, would this create a scenario where Splunk sends data to the indexer tier twice (once for each outputs.conf file), cloning the data into the same indexing tier?

/opt/splunkforwarder/etc/apps/comp_all_outputs/local/outputs.conf

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = spkidx01.comp.com:9997, spkidx02.comp.com:9997, spkidx03.comp.com:9997
autoLB = true

/opt/splunkforwarder/etc/apps/comp_all_forwarder_outputs/local/outputs.conf

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.15.4.229:9997, 10.15.5.85:9997, 10.15.4.250:9997

The IP addresses listed resolve to the FQDNs in the previous outputs.conf file. I would expect Splunk, or maybe the OS, to treat these as two separate destination lists.

TIA!
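For what it's worth: .conf files at the same precedence level are merged stanza by stanza, so the two [tcpout:primary_indexers] stanzas collapse into a single output group and one app's server value wins per attribute — the data is not sent twice. Cloning happens when defaultGroup (or _TCP_ROUTING) names multiple distinct target groups. Still, keeping one authoritative outputs app avoids ambiguity about which server list is in effect (a sketch using the hostnames from the question):

```
# Keep a single app, e.g. ./apps/comp_all_outputs/local/outputs.conf,
# and remove the outputs.conf from the duplicate app entirely.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = spkidx01.comp.com:9997, spkidx02.comp.com:9997, spkidx03.comp.com:9997
autoLB = true
```

Running splunk btool outputs list --debug on the UF shows the merged result and which file each surviving attribute came from.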
Hi All, currently we have a table like the one below. Target values are fixed for each row, but columns are added dynamically (any month of the calendar year, e.g. June, July, August); they actually come from a month field — after stats we used the chart command to show month names as columns.

target | June | July
100    | 100  | 96
98     | 96   | 100
97     | 92   | 93
96     | 90   | 91

Now each cell value needs to be compared with the corresponding target value in the same row, e.g. 100 in June is compared with 100 in target, 96 in June with 98 in target, and so on, based on the following conditions:

If June >= target -> show June in green
If June - target < 5% -> show June in blue
If June - target > 5% -> show June in red

Expected output: [screenshot]
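One way to compute the comparison for dynamically named month columns (a sketch; the explicit month list and the reading of the thresholds as "percent below target" are assumptions) is foreach, emitting a companion <month>_range field that dashboard color formatting can key on:

```
| foreach June July August
    [ eval <<FIELD>>_range = case(
        '<<FIELD>>' >= target, "green",
        (target - '<<FIELD>>') / target < 0.05, "blue",
        true(), "red") ]
```

foreach substitutes each listed column name for <<FIELD>> before the eval runs, so the same case() logic applies to every month column against that row's target.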