All Posts

This can be caused by syslog not supporting newlines (\n). The following settings on the HF will improve this.

props.conf

[your-sourcetype]
TRANSFORMS-◯◯ = transname

transforms.conf

[transname]
INGEST_EVAL = _raw=replace(_raw, "\n", " ")
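To verify the HF actually picked the settings up, btool can help; 'your-sourcetype' and 'transname' below are the placeholders from above:

splunk btool props list your-sourcetype --debug
splunk btool transforms list transname --debug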
Does the POST endpoint work for Splunk Cloud? The documentation linked is for Enterprise. The original question, and mine, is regarding the Splunk Cloud platform.

https://${STACK_NAME}.splunkcloud.com:8089/servicesNS/nobody/${APP_NAME}/properties/${CONF_NAME}/${STANZA}

SCP has limitations on what admins can do via the API for modifying app configurations. https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/RESTTUT/RESTandCloud
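For reference, this is the Enterprise-style call in question, as a sketch only, with placeholder credentials and a made-up key; on Splunk Cloud the management port is often not reachable at all, per the limitations above:

curl -k -u admin:changeme \
  "https://${STACK_NAME}.splunkcloud.com:8089/servicesNS/nobody/${APP_NAME}/properties/${CONF_NAME}/${STANZA}" \
  -d some_key=some_value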
Have a nice day, everyone! I came across some unexpected behavior while trying to move some unwanted events to the nullQueue. I have a sourcetype named 'exch_file_trans-front-recv'. Events for this sourcetype are ingested by a universal forwarder with the settings below:

props.conf

[exch_file_trans-front-recv]
ANNOTATE_PUNCT = false
FIELD_HEADER_REGEX = ^#Fields:\s+(.*)
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = date_time
BREAK_ONLY_BEFORE_DATE = true
MAX_TIMESTAMP_LOOKAHEAD = 24
initCrcLength = 256
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

[no_column_headers]
REGEX = ^#.*
DEST_KEY = queue
FORMAT = nullQueue

In this sourcetype I have some events that I want to delete before indexing. You can see an example below:

2024-08-22T12:58:31.274Z,Sever01\Domain Infrastructure Sever01,08DCC212EB386972,6,172.25.57.26:25,172.21.255.8:29635,-,,Local

So I'm interested in deleting events containing the pattern '172.21.255.8:<port>,'. To do it, I created some settings on the indexer cluster layer:

props.conf

[exch_file_trans-front-recv]
TRANSFORMS-remove_trash = exch_file_trans-front-recv_rt0

transforms.conf

[exch_file_trans-front-recv_rt0]
REGEX = ^.*?,.*?,.*?,.*?,.*?,172.21.255.8:\d+,
DEST_KEY = queue
FORMAT = nullQueue

After applying this configuration across the indexer cluster, I still see new events matching that pattern. What am I doing wrong?
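As an aside, one way to sanity-check the transform's regex against the sample event from search (a sketch only; it won't explain why the indexer-side transform isn't firing):

| makeresults
| eval _raw="2024-08-22T12:58:31.274Z,Sever01\\Domain Infrastructure Sever01,08DCC212EB386972,6,172.25.57.26:25,172.21.255.8:29635,-,,Local"
| regex _raw="^.*?,.*?,.*?,.*?,.*?,172.21.255.8:\d+,"

If the event survives the regex command, the pattern matches.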
Hi @claudio_manig, I am trying to do as you wrote in outputs.conf, but it still has header problems. Could you provide me with a practical example, please? Thank you so much for your kindness and helpfulness, Giulia
If I understand you right, you want to start working with the events ingested into the SOAR platform, where your playbooks might all similarly start by retrieving each container's artifact data? If so, I find myself relying on SOAR's code nodes more often than not to get the level of data I want. Within the first line of custom code, you gain myriad prepopulated variables for accessing 'raw' data from the prior node and overall container/event data: try printing some of those parameters across the top.

Despite those params, I generally rely on REST queries to obtain artifact data, like much of the client-side code itself. Install the HTTP app and create an asset that points to 127.0.0.1/rest. Make sure one of your parameters includes a REST access token/header from some user. Then your playbooks can call that HTTP app action node to GET/PUT/POST whatever they need, specifically "https://..host../rest/artifact?_filter_container=#####", whose results will include a 'cef' key with the verbatim artifact data available for you to directly consume, modify, or simply pass forward into future nodes.

Let me know if I'm way off base, but this is generally how I manipulate individual container artifact data, inside and outside of individual playbooks.
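To make that concrete, here is a rough sketch of the same REST approach from inside a code block, using plain Python requests; the host, token, and container id are placeholders, and 'ph-auth-token' is the standard SOAR REST auth header:

import requests

# Placeholders: point these at your own SOAR instance and automation-user token.
SOAR_BASE = "https://127.0.0.1"
HEADERS = {"ph-auth-token": "YOUR_AUTOMATION_TOKEN"}

def get_container_artifacts(container_id):
    """Fetch all artifacts for a container; each record carries its CEF data."""
    resp = requests.get(
        f"{SOAR_BASE}/rest/artifact",
        params={"_filter_container": container_id, "page_size": 0},
        headers=HEADERS,
        verify=False,  # self-signed certs are common on 127.0.0.1
    )
    resp.raise_for_status()
    # 'data' is the list of artifacts; the 'cef' key holds the raw field values.
    return [artifact["cef"] for artifact in resp.json().get("data", [])]

One usage note: page_size=0 asks the REST layer for all results in one page rather than the default paged response.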
Hi Team, I am trying to instrument a .NET 4.8 application that uses ASP.NET SignalR over WebSockets. When accessing this application, the AppDynamics profiler loads successfully, but I don't see any metrics in the controller. Please check the URL below (an XHR request) that I expect to see in the controller:

GET /signalr/connect transport=webSockets&clientProtocol=2.1&connectionToken=VRaaiTyPGpv6nQYxV59QI3x6IGjDEvSSCf1ANWpXALK0c6DjkOh9vFnl5MPGlMl4qJWFSAYWcx0HIpiIHBb0HOGSeawT%2FofowF35o5aqOAgrzeYeaAs9spjxBBg6qknK&connectionData=%5B%7B%22name%22%3A%22chathub%22%7D%5D&tid=10

Apart from custom instrumentation, since I don't know the application's class and method information, is there a way to capture this transaction?
The service is already running. I installed it from this source: https://github.com/signalfx/splunk-otel-collector/releases
I'm using Red Hat Enterprise Linux release 8.10 (Ootpa), and I get the same error when installing on Ubuntu 22.04.4.
Works great, many thanks!

And for a multi-select, i.e. the links input type, this sets the tooltip for each of the choices:

$('#i_stats > div > div > div').find('button[label="kosten"]').attr('title','Kosten in Euro').attr('data-toggle','tooltip').attr('data-placement','bottom');
Hi Splunkers, I'm new to React development and currently working on a React app that handles creating, updating, cloning, and deleting users for a specific Splunk app. The app is working well, but for development purposes, I've hardcoded the REST API URL, username, and password. Now I want to enhance the app so it dynamically uses the current session's user authentication rather than relying on hardcoded credentials.

Here's the idea I'm aiming for: when a user (e.g., "user1" with admin roles) logs into Splunk, their session credentials (like a session key or authentication token) are stored somewhere, right? I need to capture those credentials in my React app. Does this approach make sense? I'm looking for advice on how to retrieve and use the session credentials, token, or session key for the logged-in user in my "User Management" React app.

Here's the current code I'm using to fetch all users (with hardcoded credentials):

// Fetch user data from the API
const fetchAllUsers = async () => {
  try {
    const response = await axios.get('https://localhost:8089/services/authentication/users', {
      auth: {
        username: 'admin',
        password: 'changeme'
      },
      headers: {
        'Content-Type': 'application/xml'
      }
    });
    // Handle response
  } catch (error) {
    console.error('Error fetching users:', error);
  }
};

I also tried retrieving the session key using this cURL command:

curl -k https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=changeme

However, I'm still hardcoding the username and password, which isn't ideal. My goal is for the React app to automatically use the logged-in user's session credentials (session key or authentication token) and retrieve the hostname of the deployed environment. Additionally, I'm interested in understanding how core Splunk user management operates and handles authorizations. My current approach might be off, so I'm open to learning the right way to do this.

Can anyone guide me on how to achieve this in the "User Management" React app? Any advice or best practices would be greatly appreciated! Thanks in advance!
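One common pattern, sketched under the assumption that the React app is served from within Splunk Web: route requests through the splunkd proxy path, which rides on the logged-in user's existing session cookie, so nothing is hardcoded. The locale prefix ('en-US') and endpoint below are assumptions about a standard deployment:

// Sketch: fetch the user list through Splunk Web's splunkd proxy.
// Assumes this code runs on a page served by Splunk Web, so the
// browser already holds the current user's session cookie.
const fetchAllUsers = async () => {
  const response = await fetch(
    '/en-US/splunkd/__raw/services/authentication/users?output_mode=json',
    {
      method: 'GET',
      credentials: 'include' // send the existing Splunk Web session cookie
    }
  );
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.entry; // each entry describes one user
};

Because the proxy enforces the session user's capabilities, the app automatically acts as whoever is logged in, and the relative URL removes the hardcoded hostname as well.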
Okay, which OS distribution and version do you run?
Hi @PaulPanther

I ran this from the guide:

curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
sudo sh /tmp/splunk-otel-collector.sh --realm au0 -- ********************** --mode agent --with-instrumentation --discovery
Could you please provide the steps that you've executed before?
Thank you for your time. But I fixed the error: in the Search & Reporting -> Dashboards app, I selected a dashboard and clicked 'Set as Home Dashboard'. I went back to the Home page, found the dashboard I'd just assigned, then deleted it, and the error disappeared.
I'm new to Splunk. I have this error after finishing the installation:

[root@rhel tmp]# systemctl restart splunk-otel-collector
[root@rhel tmp]# systemctl status splunk-otel-collector
● splunk-otel-collector.service - Splunk OpenTelemetry Collector
   Loaded: loaded (/usr/lib/systemd/system/splunk-otel-collector.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/splunk-otel-collector.service.d
           └─service-owner.conf
   Active: failed (Result: exit-code) since Thu 2024-08-22 16:30:11 WIB; 273ms ago
  Process: 2760714 ExecStart=/usr/bin/otelcol $OTELCOL_OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 2760714 (code=exited, status=1/FAILURE)

Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Service RestartSec=100ms expired, scheduling restart.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Scheduled restart job, restart counter is at 5.
Aug 22 16:30:11 rhel systemd[1]: Stopped Splunk OpenTelemetry Collector.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Start request repeated too quickly.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Aug 22 16:30:11 rhel systemd[1]: Failed to start Splunk OpenTelemetry Collector.
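For what it's worth, the status output above only shows the restart loop, not the collector's own error. A standard systemd way to pull the underlying startup failure (nothing Splunk-specific assumed):

journalctl -u splunk-otel-collector --no-pager -n 100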
Hello, I'm trying to create a DB Connect input to log the result of a query into an index. The query returns data, as I can see when I execute it from Splunk; however, when I go to Search I can't find anything in the index I configured for it.

1 - From the "DB Connect Input Health" dashboard I see no errors, and it shows events from the input I created every x minutes (exactly as I configured it). It also shows this metric, which confirms that data is being returned by the execution:
DBX - Input Performance - HEC Median Throughput
Search is completed
0.0465 MB

2 - From "index=_internal pg6 source="/opt/splunk/var/log/splunk/splunk_app_db_connect_server.log"" I can see:
Job 'my_input_name' started
Job 'my_input_name' stopping
Job 'my_input_name' finished with status: COMPLETED

3 - If I search the index I created for it, it is empty.

4 - splunk_app_db_connect 3.9.0

Thanks for any light!
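A quick check for point 3, as a sketch with a placeholder index name: run a tstats count over All Time, since events indexed with an unexpected timestamp can land outside the search window and make the index look empty.

| tstats count WHERE index=your_dbconnect_index BY sourcetype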
Hello, could you please provide some sample data?
You have to use earliest() and latest() to get the oldest and the most recent event.
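A minimal sketch of that idea (the index name is a placeholder):

index=your_index
| stats earliest(_raw) AS oldest_event latest(_raw) AS newest_event

earliest() and latest() pick values from the chronologically first and last matching events, unlike min()/max(), which compare the values themselves.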
Hi @susheelpatil1, did you resolve the issue? I'm new to Splunk and have the same error.
Please share the SPL query and some sample data. Has the data format changed in the meantime?