All Posts


Hi All, I have a log file like the one below:

[Request BEGIN] Session ID - 1234gcy6789rtcd, Request ID - 2605, Source IP - 123.245.7.66, Source Host - 123.245.7.98, Source Port - 78690, xyz Engine - XYZS_BPM_Service, PNO - 1234, Service ID - abc12nf [Request END] Success :

[Request BEGIN] Session ID - 1234gcy6789rtcd, Request ID - 2605, Source IP - 123.245.7.66, Source Host - 123.245.7.98, Source Port - 78690, xyz Engine - XYZS_BPM_Service, PNO - 1234, Service ID - abc12nf [Request END] Success :  Details about the failure

Along with the above there are a lot of other details in the log, but these are the fields I need to create a dashboard. Can anyone please help me with how to extract all of the above fields, and how to create a dashboard showing how many requests are successful, along with details about the successful requests such as IP and service name? Thanks a lot in advance. Regards, AKM
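A minimal sketch of a search-time extraction using rex, assuming the index and sourcetype names (your_index, your_sourcetype) are placeholders you would adjust to your data:

index=your_index sourcetype=your_sourcetype
| rex "Session ID - (?<session_id>\S+), Request ID - (?<request_id>\d+), Source IP - (?<src_ip>\S+), Source Host - (?<src_host>\S+), Source Port - (?<src_port>\d+), xyz Engine - (?<engine>\S+), PNO - (?<pno>\d+), Service ID - (?<service_id>\S+)"
| rex "\[Request END\]\s+(?<status>\w+)"

For the dashboard, a panel showing successful requests with their details could then be something like:

... | search status=Success
| stats count by src_ip, service_id, engine

and a single-value "successful requests" panel could end with just | stats count. This is only a sketch; you may prefer to define the extractions once in props.conf (an EXTRACT- setting or a field transform) instead of repeating rex in every panel.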
I have tried the suggestions below, but it is still not working, although the throttling and timeout errors are now gone. I have also checked the permissions on Azure for the client we are using. The current error logs are:

05-13-2025 12:07:30.690 +0000 INFO ExecProcessor [326053 ExecProcessor] - Removing status item "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_signins.py (MS_AAD_signins://SignInDetails) (isModInput=yes)

05-13-2025 06:41:33.207 +0000 ERROR UiAuth [46016 TcpChannelThread] - Request from 122.169.17.168 to "/en-US/splunkd/__raw/servicesNS/nobody/TA-MS-AAD/TA_MS_AAD_MS_AAD_signins/SignInDetails?output_mode=json" failed CSRF validation -- expected key "[REDACTED]8117" and header had key "10508357373912334086"
I'm trying to enable SignalFx AlwaysOn Profiling for my Java application. The app is already instrumented to send metrics directly to the ingest endpoint and to send traces via a Collector agent running on the host. I have a couple of questions:

1. Can the ingest endpoint also be used for profiling, similar to how it's used for metrics? If yes, could you please share the exact endpoint format or a link to the relevant documentation?
2. I attempted to enable profiling by pointing to the same Collector endpoint used for tracing. The logs indicate that the profiler is enabled, but I'm also seeing a message saying "Exporter failed", without a specific reason for the failure. Could you help me troubleshoot this issue?

Here are the relevant log entries:

com.splunk.opentelemetry.profiler.ConfigurationLogger - -----------------------
com.splunk.opentelemetry.profiler.ConfigurationLogger - Profiler configuration:
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.enabled : true
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.directory : /tmp
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.recording.duration : 20s
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.keep-files : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.logs-endpoint : http://<host_ip>:4318
com.splunk.opentelemetry.profiler.ConfigurationLogger - otel.exporter.otlp.endpoint : http://<host_ip>:4318
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.memory.enabled : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.tlab.enabled : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.memory.event.rate : 150/s
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.include.internal.stacks : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.tracing.stacks.only : false
com.splunk.opentelemetry.profiler.JfrActivator - Profiler is active.
com.splunk.opentelemetry.profiler.EventProcessingChain - In total handled 151 events in 32ms
io.opentelemetry.sdk.logs.export.SimpleLogRecordProcessor - Exporter failed

Any help in understanding the root cause and resolving the export failure would be appreciated.
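One thing worth checking: the AlwaysOn profiler ships its call-stack data as OTLP log records to splunk.profiler.logs-endpoint, so an "Exporter failed" message often means the Collector on the host is not accepting OTLP over HTTP on port 4318, or has no logs pipeline wired up. A minimal sketch of the relevant Collector configuration, assuming the standard OpenTelemetry Collector config layout (the exporter here is only a placeholder; keep whatever exporter your Splunk OTel Collector distribution already defines for profiling/log data):

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # placeholder exporter just to make the sketch valid
  debug:

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]

Also confirm that http://<host_ip>:4318 is actually reachable from the application host (no firewall between the app and the Collector) before digging further.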
Ok. If I understand you correctly, you are using UFs which send data directly to indexers, those indexers index locally as well as send a copy to a syslog destination, and you're doing that by defining transforms manipulating _SYSLOG_ROUTING on the indexers. Do I get that right?

In this case, data already processed by other "full" Splunk Enterprise components (SHs, CM and so on) is _not_ processed by the indexers.

tl;dr - You must create syslog outputs and transforms for Splunk-originating events on the source servers (SHs, CM...) as well; a sketch follows below. You might be able to address your problem with ingest actions, but I'm no expert there.

Longer explanation: Data in Splunk can be in one of three "states". Normally an input reads raw data. This raw data - if received on a UF - is split into chunks and sent to an output as so-called "cooked data". This data is not yet split into separate events and not yet timestamped; it's just chunks of raw data along with a very basic set of metadata.

If raw data from an input or cooked data from a UF is received by a "heavy" component (a full Splunk Enterprise instance, regardless of its role), it gets parsed - the data is split into single events, a timestamp is assigned to each event, indexed fields are extracted and so on. At this point we have data which is "cooked and parsed", often called just "parsed" for short. Depending on the server's role, that data might be indexed locally or sent to output(s).

But if parsed data is received on an input, it's not touched again except for the earlier-mentioned ingest actions. It's not reparsed, and no transforms are run on data you receive in parsed form. So if you're receiving internal data from your Splunk servers, that data has already been parsed on the source Splunk server - any transforms you have defined on your indexers do not apply to it.
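A minimal sketch of what that could look like on each source server (SH, CM, etc.). The stanza names, destination address and sourcetype are assumptions you would adapt to your environment (they mirror the soc_syslog example later in this thread):

# outputs.conf on the SH / CM / other full instances
[syslog:soc_syslog]
server = 1.x.1.2:514
type = udp
priority = <13>
timestampformat = %b %e %T

# props.conf - apply to the internal sourcetypes you want forwarded, e.g. Splunk's own logs
[splunkd]
TRANSFORMS-soc_syslog_out = send_to_soc

# transforms.conf
[send_to_soc]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = soc_syslog

Because these full instances parse their own internal data before forwarding it, the props/transforms pair takes effect there rather than on the indexers.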
@tah7004  OK! Below is the answer you were asking about.

1. $SPLUNK_HOME/etc/apps/myapp/local/props.conf

# under the stanza for your sourcetype
TRANSFORMS-ingest_time_lookup = lookup_extract

2. $SPLUNK_HOME/etc/apps/myapp/local/transforms.conf

[lookup_extract]
INGEST_EVAL = field1=replace(_raw, ".*field1=([0-9A-Za-Z.]+).*", "\1"), field2=replace(_raw, ".*field2=([0-9A-Za-Z.]+).*", "\1"), field3=json_extract(lookup("test.csv", json_object("field1", field1, "field2", field2), json_array("field3")), "field3")
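For reference, the lookup() eval function matches on the fields passed in json_object and returns the fields named in json_array, so test.csv would need columns along these lines (the values shown are made up for illustration):

field1,field2,field3
abc,1.2.3.4,some-value

The lookup file must also be available to the app (with appropriate permissions) so it can be resolved at ingest time on the parsing tier.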
Oh wow - this is great, and thanks very much for the effort on this. Cheers, Robert
Hi Team, how can we optimize the startup process of the AppDynamics Events Service cluster (Events Service & Elasticsearch) within the operating system's system and service manager, including implementing self-healing mechanisms to automatically detect and resolve any issues that may arise during startup, for the scenarios below?

1. Graceful shutdown
2. Unplanned downtime (unexpected VM shutdown)
3. Accidental kill of the process (Events Service and Elasticsearch)
4. Optional: OOM (Out Of Memory)
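Not a full answer, but since most Linux distributions use systemd as the service manager, here is a hedged sketch of a unit file that covers the kill, OOM and unexpected-shutdown cases by restarting the Events Service automatically. The unit name, paths, user and launch command are assumptions you would replace with your actual installation layout:

# /etc/systemd/system/appd-events-service.service (hypothetical name)
[Unit]
Description=AppDynamics Events Service
After=network-online.target
Wants=network-online.target

[Service]
User=appdynamics
# placeholder paths - point these at your actual events-service scripts
ExecStart=/opt/appdynamics/events-service/bin/events-service.sh start -p conf/events-service-api-store.properties
ExecStop=/opt/appdynamics/events-service/bin/events-service.sh stop
# if the launch script forks into the background, change Type= to forking and add a PIDFile=
Type=simple
# self-healing: restart after crashes, kills and OOM, but not after a clean stop
Restart=on-failure
RestartSec=30
# Elasticsearch needs a generous file-handle limit
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

After systemctl daemon-reload and systemctl enable --now appd-events-service, a graceful systemctl stop will not trigger a restart, while an unexpected VM reboot is covered by the enable/WantedBy entry and process kills or OOM terminations are covered by Restart=on-failure.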
Thanks for the ESCU companion app hint - that's quite a good idea alongside an automatic merge concept I'm developing, which produces a report telling analysts what to do. Thanks for that; I will mark this as answered.
Hi @Navneet_Singh  This is a known issue with version 4.0.5: "Not able to remove records from the CSV lookup file in Splunk App for Lookup File Editing version 4.0.5" (ref LOOKUP-300). For more info please check out the Known Issues section of the release notes: https://docs.splunk.com/Documentation/LookupEditor/4.0.5/User/Knownissues

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @MsF-2000  How long do the searches take to execute when you run the dashboard yourself? It sounds like you might be being impacted by render_chromium_timeout, which is the maximum amount of time Chromium (which generates the PDF) waits before taking the export. The default value is 30 seconds:

render_chromium_timeout = <unsigned integer>
* The number of seconds after which the Chromium engine will timeout if the engine still needs to render the dashboard output.
* This setting does not impact the render_chromium_screenshot_delay.
* Default: 30

Alternatively, you could look at increasing render_chromium_screenshot_delay (0 by default). Check out the specific docs for these settings at https://docs.splunk.com/Documentation/Splunk/latest/Admin/limitsconf#:~:text=3600%20(60%20minutes)-,render_chromium_timeout,-%3D%20%3Cunsigned%20integer%3E%0A*%20The

It's also worth checking out "Modify the timeout setting for rendering the dashboard".

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
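A sketch of how the override might look on the instance performing the export - the stanza placement is an assumption here, so please verify it against the limits.conf.spec shipped with your Splunk version before applying:

# $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local/limits.conf)
[pdf]
# allow slow dashboards up to 5 minutes to render before Chromium gives up
render_chromium_timeout = 300
# optionally wait a few extra seconds after page load before the screenshot
render_chromium_screenshot_delay = 5

A restart is generally needed for limits.conf changes to take effect.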
Hi,

Please find the answers below.

"So, you are using [syslog] in outputs.conf on your indexers to send the data to Qradar? Is the other data you are sending to Qradar also being sent from the indexers, rather than the source? If so I guess this rules out a connectivity issue." - Yes, we are using syslog:

[syslog:xx_syslog]
server = 1.x.1.2:514
type = udp
priority = <13>
timestampformat = %b %e %T

"Lastly, how have you configured the other data sources to send from the indexers to Qradar? Please share config examples of how you've achieved this so we can see if there is an issue here." - props.conf for Cisco logs:

[cisco:ios]
TRANSFORMS-soc_syslog_out = send_to_soc

transforms.conf:

[send_to_soc]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = soc_syslog
Thank you for your reply.

So, you are using [syslog] in outputs.conf on your indexers to send the data to Qradar? Is the other data you are sending to Qradar also being sent from the indexers, rather than the source? If so, I guess this rules out a connectivity issue.

Lastly, how have you configured the other data sources to send from the indexers to Qradar? Please share config examples of how you've achieved this so we can see if there is an issue here.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
1) Your architecture: UF --> IDX --> SH. Two sites with a CM/LM cluster; each site has 1 IDX, SH, CM and LM, and on the standby site the CM/LM Splunk service is stopped.

2) Your configuration pertaining to data ingestion and data flow: we are using the indexer to send the data to the 3rd party. All of the data is received at the remote end except the Splunk Windows components, and we are also able to send the indexer server's own logs to the 3rd party.
As far as I remember (but I'm no Cloud expert, so double-check this), when subscribing to Splunk Cloud you have a choice between AWS and GCP hosting.

And, to add to the confusion: if you don't want Splunk to manage the whole infrastructure for you (which has its pros and cons), you can also deploy your own "on-premise" Splunk Enterprise environment on VM instances in the cloud of your choice. But that has nothing to do with Splunk Cloud - it would still be Splunk Enterprise.
Not necessarily. You can use the output of a function operating on _raw as an argument to the lookup() function.
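For example, a hypothetical sketch along the lines of the INGEST_EVAL answer elsewhere in this thread (the field and file names are made up):

[lookup_extract]
INGEST_EVAL = field3=json_extract(lookup("test.csv", json_object("field1", replace(_raw, ".*field1=([0-9A-Za-Z.]+).*", "\1")), json_array("field3")), "field3")

Here the replace(_raw, ...) result is passed straight into lookup() without ever creating an intermediate indexed field.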
Ok, wait. You're asking about something not working in a relatively unusual setup. So first, please describe in detail:

1) Your architecture
2) Your configuration pertaining to data ingestion and data flow.

Without that we have no knowledge of your environment: we don't know what is working and what is not, or what you configured and where in your attempt to make it work, and everybody involved will only waste time ping-ponging questions trying to understand your issue.
Hi,

"Are you sending these logs to your own indexers *and* a 3rd party indexer(s)? Or just to the 3rd party?" - Just to the 3rd party (Qradar).

"You say you can see the data on your SH; when you search it, please check the splunk_server field from the interesting fields on the left - is the server(s) listed here your indexers, or the SH?" - Indexers.

"How have you configured the connectivity to the 3rd party?" - Yes, it is forwarding other syslog data successfully.
Hi @randoj!  We just created a lookup definition manually in a local/transforms.conf, as you would for any other KV Store lookup. Additionally, we needed to do the same for the mc_incidents collection, as it is needed to correlate notable_ids and incident_ids, the latter of which are used in mc_notes. It is probably easier to access the collections using the Python SDK and scripts, but this solution worked for us and required less setup. Hope this helps!
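For anyone looking for a concrete starting point, a hedged sketch of what such transforms.conf definitions could look like - the stanza names and field lists here are assumptions, so check the actual collection schema in the app's collections.conf before copying:

# local/transforms.conf
[mc_incidents_lookup]
external_type = kvstore
collection = mc_incidents
# assumed fields - verify against the collection's actual fields
fields_list = _key, incident_id, notable_id

[mc_notes_lookup]
external_type = kvstore
collection = mc_notes
fields_list = _key, incident_id, note

They can then be used like any other lookup, e.g. | inputlookup mc_incidents_lookup.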
Can someone please guide me on this?
Hi @malisushil119  To ensure we can answer thoroughly, could you please confirm a few things:

1. Are you sending these logs to your own indexers *and* a 3rd party indexer(s)? Or just to the 3rd party?
2. You say you can see the data on your SH; when you search it, please check the splunk_server field from the interesting fields on the left - is the server(s) listed there your indexers, or the SH?
3. How have you configured the connectivity to the 3rd party?
4. Please could you check your _internal logs for any TcpOutputFd errors (assuming standard Splunk-to-Splunk forwarding)? See the example search below.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
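A quick way to check for those output errors (a sketch - loosen the component filter if nothing matches in your version):

index=_internal sourcetype=splunkd (component=TcpOutputFd OR component=TcpOutputProc) (log_level=ERROR OR log_level=WARN)
| stats count by host, component, log_level

Run this over the time range where data is missing; repeated errors there usually point at connectivity or configuration problems on the output path.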