All Posts

My Splunk query gets the required results using the query below. After running it, I get NULL values in one of the columns. Per the business requirement I need to replace the NULL values with blank (or some other value) in the column acd2. Note that the named capture groups in the rex commands below are reconstructed from the field names used later in the search.

index=application1 "ProcessWriteBackServiceImpl" "userList" sourcetype="intradiem:iex:ewfm" source="E:\app1\\appsec\\appsec1\\test.log"
| rex field=_raw "^(?:[^\[\n]*\[){2}(?P<actiontype>\w+)[^=\n]*=\[(?P<empid>\d+)"
| eval empid = substr("000000", 0, max(9-len(empid), 0)) . empid
| search actiontype="*" empid="*"
| stats count by actiontype, empid, _time
| table actiontype, empid, _time
| join type=inner empid [search index="*" earliest=-24hr latest=now source="D:\\app2\\app_data.csv"
    | rex field=_raw "^(?P<empid>[^,]+),(?P<msid>\w+),(?P<muid>[^,]+),(?P<muname>[^,]+),(?P<acd>\d+)\,(?P<acd2>\w+)\,(?P<lastname>[^,]+),(?P<firstname>\w+)"
    | search empid="*" msid="*" muid="*" muname="*" acd="*" acd2="*" lastname="*" firstname="*"]
| eval Time = strftime(_time, "%Y-%d-%m %H:%M:%S")
| fields - _time
| table Time, actiontype, empid, muid, muname, acd, acd2, lastname, firstname

Output results:

  Time                 actiontype  empid         muid  muname  acd  acd2  lastname     firstname
1 2024-19-04 08:10:18  Break       0000000       3302  test    55   NULL  sample name  sample name
2 2024-19-04 08:14:41  Break       0000000       6140  test    55   NULL  sample name  sample name
3 2024-19-04 08:35:07  Break       00000000000   1317  test    55   NULL  sample name  sample name
4 2024-19-04 08:25:41  Break       000000000     1106  test    55   NULL  sample name  sample name
5 2024-19-04 07:25:19  0           000000000000  6535  test    55   96    sample name  sample name
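For reference, one way this kind of NULL replacement is often handled in SPL (a sketch, not necessarily the asker's final answer): if acd2 is genuinely missing in some results, coalesce it to a blank; if the field actually contains the literal string "NULL" from the CSV, rewrite it with an eval. The field name acd2 comes from the query above; everything else is generic SPL.

... | eval acd2=coalesce(acd2, "")
... | eval acd2=if(acd2=="NULL", "", acd2)

Either line can be appended at the end of the search to show an empty string in place of NULL.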
Forgive my lack of knowledge, but the variables $ingest_URL, $SPLUNK_REALM, ... are they configured in ITSI? I see that they are necessary for the installation of the collector.
According to the app's Splunkbase page, "Versions 3.0.x and higher can connect to both Splunk Enterprise and Splunk Enterprise Cloud versions 7.3 and higher", so yes, it is compatible with Splunk Cloud.
I can't help if I don't understand what the goal is.  Once we have a deterministic way to set the service name I may be able to help.
Could I get by with creating a Simple Log path by port (https://splunk.github.io/splunk-connect-for-syslog/main/sources/base/simple/)?
If you use ITSI or ITE you could install it, but it is not essential for ingesting data via OTel.
Is the Splunk ODBC "deployment" compatible with Splunk Cloud? For example, following this guide, would it be possible to set up a cloud instance instead of a local/Enterprise URL?
It sounds like you have created a custom syslog app with a custom application type of data, and it's not one of the common NETWORK syslog sources. This means it's not going to be parsed, formatted, and handled by SC4S, so your options are:

Option 1. See if the SC4S community can create a log path for you. As this sounds like it's NOT network data but custom application data, you might have issues: SC4S is not designed to handle OS or application data. You can log an issue at https://github.com/splunk/splunk-connect-for-syslog and maybe they can help; you will need to send a PCAP file. (I doubt this is feasible, so then look at Option 2.)

Option 2. Install a normal syslog server (syslog-ng or rsyslog) instead of SC4S, since SC4S is primarily designed to handle common network syslog data sources. Send your custom syslog app data to the server running syslog-ng or rsyslog and configure it to log the data into text files in a folder. Install a Splunk UF, configure it to monitor those log files (inputs.conf), and send the data to Splunk Cloud (outputs.conf); a sketch of both files follows below. You then need to create a TA to parse the custom syslog raw data: apply metadata, sourcetype, and field extractions, ensure the timestamp etc. are all correct, then install the custom TA in Splunk Cloud.
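A minimal sketch of the UF side of Option 2, assuming the syslog server writes files under /var/log/customapp and that the index name (custom_syslog), sourcetype (custom:app), and receiving endpoint are placeholders for your environment (in practice, for Splunk Cloud you would normally install the Universal Forwarder credentials package from your cloud stack rather than hand-write outputs.conf):

inputs.conf
[monitor:///var/log/customapp/*.log]
index = custom_syslog
sourcetype = custom:app
disabled = 0

outputs.conf
[tcpout]
defaultGroup = cloud_indexers

[tcpout:cloud_indexers]
server = inputs.<your-stack>.splunkcloud.com:9997

The custom TA installed in Splunk Cloud would then carry the props/transforms for timestamp extraction and field parsing for the custom:app sourcetype.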
Thank you very much for your answer. An initial and basic doubt: the Content Pack for Splunk Observability Cloud must be installed on the Enterprise environment, correct?   BR
@ITWhisperer - Please see my comments inline:

Are there lines where "AP sent to" or "AH sent to" or "MP sent to" exist in events without "---> TRN:" also being present? -- No. "AP sent to", "AH sent to", or "MP sent to" events always occur together with "---> TRN:".

Similarly, are there events where "---> TRN:" exists and one of "AP sent to" or "AH sent to" or "MP sent to" does not exist? -- No. "---> TRN:" events always occur together with "AP sent to", "AH sent to", or "MP sent to".

Please can you explain the significance of the dropdown and how it determines which events are counted? > The dropdown is there to keep the dashboard simple: based on the selected priority (Low, Medium, or High) it shows the corresponding Transaction Pending volume. If you have another idea for handling this, kindly suggest it.
Do the "new" keys start with $7$? If yes, they are encrypted.
I wouldn't personally start with the Add-On because it just provides you with the configuration; to get a real understanding of the OTel Collector you should check out some documentation. To collect metrics and send them to the HTTP Event Collector endpoint of your Splunk Enterprise environment, you should follow these docs:

Install the Collector for Linux with the installer script — Splunk Observability Cloud documentation
Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on a Linux host — Splunk Observability Cloud documentation
Collector for Linux default configuration — Splunk Observability Cloud documentation
Splunk HEC exporter — Splunk Observability Cloud documentation

The following metrics are collected by default: Collected metrics for Linux — Splunk Observability Cloud documentation

If you have specific questions, just let me know.
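A minimal sketch of what the relevant part of the Collector configuration could look like for that setup, assuming the hostmetrics receiver and the splunk_hec exporter; the token, endpoint, index, and sourcetype values are placeholders for your own Splunk Enterprise environment:

receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
      filesystem:
      network:

exporters:
  splunk_hec:
    token: "<your-hec-token>"
    endpoint: "https://<your-splunk-host>:8088/services/collector"
    index: "<your-metrics-index>"
    sourcetype: "otel"

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [splunk_hec]

The linked default configuration and HEC exporter pages are the authoritative references for the full set of options.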
So my application sends data in RFC5424 format. It is a test C# application running on my local machine which basically sends data through a UDP client, in RFC5424 format, to an EC2 instance which runs SC4S inside Docker. The logs don't help because I don't see anything after "starting goss" and "starting syslog-ng". I am not aware of anything I have to configure in Splunk Cloud.
So the echo works - you can see data in Splunk, but your syslog APP which sends syslog data is not visible in Splunk, even though tcpdump shows the APP is sending data to SC4S. Things to check:
1. Check the "No data in Splunk" section - https://splunk.github.io/splunk-connect-for-syslog/main/troubleshooting/troubleshoot_SC4S_server/#hectoken-connection-errors-aka-no-data-in-splunk - then restart SC4S and look at the logs: /usr/bin/<podman|docker> logs SC4S
2. Is your syslog APP a common syslog source supported by SC4S?
3. Is your syslog APP in the known SC4S vendors list?
4. Check if it needs some special environment config in /opt/sc4s/env_file (for example, the McAfee ePO known source has a number of configuration options - indexes, ports, TAs, env_file settings - see https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/McAfee/epo/).
5. Check /opt/sc4s/env_file and ensure the settings for your syslog APP are set there.
6. Check /opt/sc4s/local/context/splunk_metadata.csv: ensure the keyname (your app's source) is present and mapped to the correct index in Splunk Cloud (see the sketch after this list).
7. Have you deployed the correct TAs for your syslog APP onto Splunk Cloud?
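For item 6, a hedged example of what the entries in /opt/sc4s/local/context/splunk_metadata.csv could look like, with my_custom_app, custom_syslog, and my:custom:app as placeholder key, index, and sourcetype values:

my_custom_app,index,custom_syslog
my_custom_app,sourcetype,my:custom:app

The key must match the source key SC4S assigns to your app's events, and the index must already exist in Splunk Cloud.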
Hi, yes, the code is here: Codec-Report-Batch-Python/br_uncompress.py at main · Watteco/Codec-Report-Batch-Python · GitHub. Thanks
Since Splunk 6.x is no longer available, the new URLs are:
Forwarder Manual: https://docs.splunk.com/Documentation/Forwarder/9.1.2/Forwarder/Installanixuniversalforwarder..
Installation on MacOS: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/InstallonMacOS
Are there lines where "AP sent to" or "AH sent to" or "MP sent to" exist in events without "---> TRN:" also being present? Similarly, are there events where "---> TRN:" exists and one of "AP sent to" or "AH sent to" or "MP sent to" does not exist? Please can you explain the significance of the dropdown and how it determines which events are counted?
Is this TA hosted somewhere so we could have a better picture of what the complete Python code looks like?
Hi Hassan, This is a generic 401 authentication problem. When you send metrics or valid messages from the agent to the controller, there are several things you have to configure properly: host, port, account name (default "customer1" if you don't use the controller in multi-tenant mode), and account key. So basically there are two things you need to focus on. First, can you please check the step given below and try again?
To create a secret with a Controller access key:
$ kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
Thanks Cansel
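As a follow-up, a sketch of where the host, port, and account usually live on the Cluster Agent side; the namespace, URL, and account values below are placeholders, and the exact spec fields can vary with the agent version, so treat this as an assumption to verify against your own cluster-agent.yaml:

$ kubectl -n appdynamics get secret cluster-agent-secret   # confirm the secret from the step above exists

# excerpt of cluster-agent.yaml (placeholder values)
spec:
  appName: "my-cluster"
  controllerUrl: "https://<controller-host>:443"   # host and port the 401 is coming from
  account: "customer1"                             # account name, default for single-tenant controllers

If the account name or the access key in the secret does not match the controller's account settings, the controller answers with 401.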
When I run
echo '<14>1 2024-04-19T12:34:56.789Z myhostname myapp 12345 - [exampleSDID@32473 iut="3" eventSource="application" eventID="1011"] Something happened through echoing.' > /dev/udp/127.0.0.1/514
I am able to see it in Splunk. But when my application sends syslog on port 514, it does not appear in Splunk, although the same message is visible when I run tcpdump on port 514. What would I be missing here? To reply to your question, I believe I have followed the steps in the runtime configuration (https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/getting-started-runtime-configuration/)
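One thing that may be worth checking, given that a local echo works but the application's remote traffic does not: whether the SC4S container actually publishes UDP 514 on the external interface and whether the host firewall / EC2 security group allows it. A rough sketch of the checks, assuming the container is named SC4S and firewalld is in use (adjust for your setup):

# Expect a mapping like 0.0.0.0:514->514/udp in the output
$ docker ps --filter name=SC4S --format '{{.Names}} {{.Ports}}'

# Confirm something is listening on UDP 514 on the host
$ ss -lnup | grep ':514'

# Confirm UDP 514 is allowed from the sender's address
$ sudo firewall-cmd --list-all

If the echo test was run on the SC4S host itself, it only proves the loopback path; traffic arriving on the EC2 instance's external interface also needs the security group to permit UDP 514 from the sender.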