All Posts


Thank you very much for your answer. An initial and basic question: the Content Pack for Splunk Observability Cloud must be installed on the Splunk Enterprise environment, correct?   BR
@ITWhisperer - please see my responses inline:
Are there lines where "AP sent to" or "AH sent to" or "MP sent to" exist in events without "---> TRN:" also being present? -- No. "AP sent to", "AH sent to", and "MP sent to" events always occur together with "---> TRN:".
Similarly, are there events where "---> TRN:" exists and one of "AP sent to" or "AH sent to" or "MP sent to" does not exist? -- No. "---> TRN:" events always occur together with "AP sent to", "AH sent to", or "MP sent to".
Please can you explain the significance of the dropdown and how it determines which events are counted? -- The dropdown is only there to keep the dashboard simple: based on the selected Priority (Low, Medium, or High), it shows the pending transaction volume for that priority. If you have another idea for handling this, kindly suggest it.
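For illustration, what I have in mind is the dropdown feeding a token into the dashboard search, roughly like this (the token name $priority$ and this snippet are only a sketch of the idea, not my actual panel):

... base search ... | where Priority="$priority$" | stats count as Pending by TestMQ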
Do the "new" keys start with $7$? If yes, they are encrypted.
I wouldn't personally start with the Add-On because it just provides you with the configuration; to get a real understanding of the OTel Collector you should check out some documentation. To collect metrics and send them to the HTTP Event Collector endpoint of your Splunk Enterprise environment, you should follow these documentation pages:
Install the Collector for Linux with the installer script — Splunk Observability Cloud documentation
Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on a Linux host — Splunk Observability Cloud documentation
Collector for Linux default configuration — Splunk Observability Cloud documentation
Splunk HEC exporter — Splunk Observability Cloud documentation
The following metrics are collected by default: Collected metrics for Linux — Splunk Observability Cloud documentation
If you have specific questions, just let me know.
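As a rough sketch of the very first step from those docs (realm and access token are placeholders; double-check the installer options in the linked documentation before relying on them):

curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh
sudo sh /tmp/splunk-otel-collector.sh --realm <YOUR_REALM> -- <YOUR_ACCESS_TOKEN>
# the agent configuration, including the splunk_hec exporter that points at your
# Splunk Enterprise HEC endpoint, then lives in /etc/otel/collector/agent_config.yaml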
So my application sends data in RFC 5424 format. It is a test C# application running on my local machine that sends data through a UDP client, in RFC 5424 format, to an EC2 instance which runs SC4S inside Docker. The logs don't help because I don't see anything after "starting goss" and "starting syslog-ng". I am not aware of anything I have to configure in Splunk Cloud.
So the echo works - you can see data in Splunk, but your syslog APP which sends syslog data is not visible in Splunk, and tcpdump shows that the APP is sending data to SC4S. Things to check:
1. Check the "No data in Splunk" section - https://splunk.github.io/splunk-connect-for-syslog/main/troubleshooting/troubleshoot_SC4S_server/#hectoken-connection-errors-aka-no-data-in-splunk Restart SC4S and look at the logs: /usr/bin/<podman|docker> logs SC4S
2. Is your syslog APP a common syslog source and supported by SC4S?
3. Is your syslog APP in the known SC4S vendors list?
4. Check whether it needs some special environment config in the /opt/sc4s/env_file (for example, look at the McAfee ePO known source; it has a number of configuration options, indexes, ports, TAs, and env_file settings - https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/McAfee/epo/)
5. Check the /opt/sc4s/env_file and ensure the settings for your syslog APP are set there.
6. Check /opt/sc4s/local/context/splunk_metadata.csv - ensure the key name (your APP source) is present and mapped to the correct index in cloud. A sketch of this check is shown below.
7. Have you deployed the correct TAs for your syslog APP onto Splunk Cloud?
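For point 6, a minimal sketch of what that check could look like (the key name my_app and index my_cloud_index are purely illustrative; use the key documented for your source):

sudo docker restart SC4S && sudo docker logs -f SC4S        # restart and watch the startup messages
grep my_app /opt/sc4s/local/context/splunk_metadata.csv     # check the metadata override for your source
# my_app,index,my_cloud_index   <- entries use the key,metadata,value format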
Hi, yes, the code is here: Codec-Report-Batch-Python/br_uncompress.py at main · Watteco/Codec-Report-Batch-Python · GitHub. Thanks
Since Splunk 6.x is no longer available, the new URLs are:
Forwarder Manual: https://docs.splunk.com/Documentation/Forwarder/9.1.2/Forwarder/Installanixuniversalforwarder..
Installation on macOS: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/InstallonMacOS
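For reference, the tarball install from that manual boils down to something like this on macOS (the filename and install path are illustrative; follow the linked manual for the exact package and location):

sudo tar xvzf splunkforwarder-9.1.2-<build>-darwin-universal2.tgz -C /Applications
sudo /Applications/SplunkForwarder/bin/splunk start --accept-license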
Are there lines where "AP sent to" or "AH sent to" or "MP sent to" exist in events without "---> TRN:" also being present? Similarly, are there events where "---> TRN:" exists and one of "AP sent to" or "AH sent to" or "MP sent to" does not exist? Please can you explain the significance of the dropdown and how it determines which events are counted?
Is this TA hosted somewhere so we could get a better picture of what the complete Python code looks like?
Hi Hassan, This is a generic 401 authentication problem. When you send metrics or messages from the agent to the controller, there are several things you have to configure properly: host, port, account name (default "customer1" if you don't use the controller in multi-tenant mode), and account key. So basically there are two things you need to focus on. First, can you please check the step given below and try again?
To create a secret with a Controller access key:
$ kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
Thanks Cansel
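After creating it, a quick sanity check that the secret is actually in place (not from the AppDynamics docs, just a generic kubectl check):

kubectl -n appdynamics get secret cluster-agent-secret -o yaml
# the data section should contain a controller-key entry (base64 encoded)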
When I run
echo '<14>1 2024-04-19T12:34:56.789Z myhostname myapp 12345 - [exampleSDID@32473 iut="3" eventSource="application" eventID="1011"] Something happened through echoing.' > /dev/udp/127.0.0.1/514
I am able to see it in Splunk. But when my application sends syslog on port 514, it does not appear in Splunk, although the same message is visible when I run tcpdump on port 514. What could I be missing here? To answer your question, I believe I have followed the steps in the runtime configuration (https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/getting-started-runtime-configuration/)
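For reference, what I am running to check this looks roughly like the following (interface and container name are from my setup, so treat them as placeholders):

sudo tcpdump -A -i any udp port 514                  # shows the RFC 5424 payload my app sends
sudo docker ps --format '{{.Names}} {{.Ports}}'      # confirms 514/udp is published to the SC4S container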
@richgalloway  I agree with your point. I tried using a case statement as well, but unfortunately it's not working as expected. Do you know any other way to handle this? That would really help me. I am also still researching.
You recognize that this is a Splunk forum where volunteers offer help related to Splunk, right?  As I said, Splunk does not "color" search results. (The only coloring function in Splunk is provided in dashboard visualizations.)  If you want to color text, you will need to develop something external to Splunk.  As you suggested, you could possibly achieve this by modifying sendmail.py (not recommended).  Alternatively, you could develop a custom command for this.  Either way, this is not the right forum.
From your logs it shows: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events. So run a search against the main index and check - if you can see this test event, then the connection is working. In terms of the /opt/sc4s/local/context/splunk_index.csv, follow all the steps from the runtime configuration below; there are a number of steps and you need to complete them all.  https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/getting-started-runtime-configuration/ As you can send curl test events to cloud, you don't need a whitelist (BUT it is best practice to have one in place for security reasons).
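For example, a quick check from the Splunk Cloud search bar (with the time range covering the SC4S startup):

index=main sourcetype=sc4s:events | head 10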
That's great news @Ryan.Paredez, thx for keeping me updated! I'm looking forward to the 24.4 release. Cheers, Jerg
mvzip, mvexpand and mvindex are simply the wrong tools for your data structure. (Well, mvexpand will be needed, but only after you properly handle the array in your data.)  As everybody in this post has pointed out: you need to post sample or precise mock data to reveal the structure. (In text, never a screenshot.) This is extremely important when asking questions about data analytics in a forum.  When you force volunteers to read your mind, not only will they get FRUSTRATED, but even if they are willing, most of the time their mind reading will be incorrect.

This said, based on your code, I can piece together a rough structure of your data.  I will use JSON to illustrate.  Something like

{
  "userActions": [
    { "application": "app1", "name": "action1", "targetUrl": "url1", "duration": 1234, "type": "actiontype1", "apdexCategory": "SATISFIED" },
    { "application": "app1", "name": "action2", "targetUrl": "url1", "duration": 2345, "type": "actiontype1", "apdexCategory": "DISATISFIED" },
    { "application": "app1", "name": "action3", "targetUrl": "url2", "duration": 3456, "type": "actiontype2", "apdexCategory": "FRUSTRATED" }
  ],
  "userExperienceScore": "FRUSTRATED",
  "events": [
    {"application": "xxx", "irrelevant": "aaa"},
    {"application": "yyy", "irrelevant": "bbb"}
  ]
}

Your event could be in JSON or it could be in XML, but it contains at least two arrays, events[] and userActions[].  Is this correct?  The array events[] is not what frustrates you because its elements and components are no longer needed after the initial search.  Your end goal from the above three elements of userActions[] is to pick out

{ "application": "app1", "name": "action3", "targetUrl": "url2", "duration": 3456, "type": "actiontype2", "apdexCategory": "FRUSTRATED" }

and display it in this format:

_time | Application | Action | Target_URL | Duration_in_Mins | User_Action_Type | useractions_experience_score
2024-04-18 22:45:22 | app1 | action3 | url2 | 0.06 | actiontype2 | FRUSTRATED

If the above looks close, the first thing you need to do is to forget all about Splunk's flattened fields userActions{}.*; in fact, discard them all.  Use spath to reach the elements of this array, then mvexpand over the elements - no funny mvzip business.  After that, everything becomes trivial. Using my speculated data, I can reconstruct your SPL into the following to obtain my illustrated output:

index="xxx" sourcetype="xxx" source=xxx events{}.application="xxx" userExperienceScore=FRUSTRATED
| fields - userActions{}.*
| spath path=userActions{}
| mvexpand userActions{}
| spath input=userActions{}
| dedup application name targetUrl
| search apdexCategory = FRUSTRATED application = * name = *
| sort - _time
| rename application as Application, name as Action, targetUrl as Target_URL, type as User_Action_Type, apdexCategory as useractions_experience_score
| eval Duration_in_Mins = round(duration / 60000, 2)
| table _time, Application, Action, Target_URL, Duration_in_Mins, User_Action_Type, useractions_experience_score

Hope this helps. Here is an emulation of my speculated data.
Play with it and compare with real data:

| makeresults
| eval _raw = "{ \"userActions\": [ { \"application\": \"app1\", \"name\": \"action1\", \"targetUrl\": \"url1\", \"duration\": 1234, \"type\": \"actiontype1\", \"apdexCategory\": \"SATISFIED\" }, { \"application\": \"app1\", \"name\": \"action2\", \"targetUrl\": \"url1\", \"duration\": 2345, \"type\": \"actiontype1\", \"apdexCategory\": \"DISATISFIED\" }, { \"application\": \"app1\", \"name\": \"action3\", \"targetUrl\": \"url2\", \"duration\": 3456, \"type\": \"actiontype2\", \"apdexCategory\": \"FRUSTRATED\" } ], \"userExperienceScore\": \"FRUSTRATED\", \"events\": [ {\"application\": \"xxx\", \"irrelevant\": \"aaa\"}, {\"application\": \"yyy\", \"irrelevant\": \"bbb\"} ] }"
| spath
``` data speculation for index="xxx" sourcetype="xxx" source=xxx events{}.application="xxx" userExperienceScore=FRUSTRATED ```
Thank you for your kind response @ITWhisperer I have made the correction in the 2nd query: "<===" was referring to a different log event. Updated query:
source=/applications/test/*instance_xyz* ("<--- TRN:" OR "---> TRN:" OR "AP sent to" OR "AH sent to" OR "MP sent to")
Please see my inline response to your question below:
The main issue with your request is that you haven't explained how the events are to be correlated between the two sources and how you would like to count them to give the desired result.
Answer: There are basically two log files: "testget.log", using the search criteria "<--- TRN:" and the Priority field information, and "testput.log", using the search criteria "---> TRN:" OR "AP sent to" OR "AH sent to" OR "MP sent to". I need help to correlate these two logs based on TRN, and I need to get the final count using TRN and TestMQ.
Select: Low, Medium, High (from the dashboard dropdown)
Output expected:
TestMQ | Low-Testget | Low-Testput | Low-AP | Low-AH | Low-MP | Low-Pending
TestMQ | Medium-Testget | Medium-Testput | Medium-AP | Medium-AH | Medium-MP | Medium-Pending
TestMQ | High-Testget | High-Testput | High-AP | High-AH | High-MP | High-Pending
Please suggest.
Thank you, @Cansel.OZCAN, this information helped me a lot.
Have you tried my previous code?
| eval route = if(match(request_path, "^/orders/\d+"), "/order/{orderID}", null())
This does exactly what you ask: create a new field named route that has a fixed pattern "/order/{orderID}".  Is there anything wrong with this? In fact, because you really only care about the first segment of the path - that fixed string "{orderID}" is just decoration - the command could be simplified to the slightly less expensive
| eval route = "/" . mvindex(split(request_path, "/"), 1) . "/{orderID}"
You can do whatever analysis you like against this field.
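For example, a follow-up aggregation on the new field could be:

| stats count by route | sort - count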