All Posts

As usual, there is probably more than one solution to a problem (in your case, ingestion of Azure Firewall logs). True, Event Hub will give you near-real-time delivery (it's not strictly real-time since it's pull-based, as far as I remember), but the storage-based method might be cheaper, and if you're OK with the latency it might be sufficient. Your original problems were most probably caused by a misconfigured sourcetype: the input data was not broken into events properly and/or the events were too long and got truncated. As a result, the JSON extractions didn't happen because the events were not well-formed JSON.
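For reference, a minimal props.conf sketch for ingesting JSON firewall events without truncation might look like the following; the sourcetype name, TRUNCATE limit, and timestamp field are assumptions for illustration, not the poster's actual settings.

[azure:firewall:json]
# break the stream into one event per JSON object instead of relying on line merging
LINE_BREAKER = ([\r\n]+)(?=\{)
SHOULD_LINEMERGE = false
# raise the limit so long firewall events are not cut off (default is 10000 bytes)
TRUNCATE = 100000
# extract fields from the well-formed JSON at search time
KV_MODE = json
# assumed timestamp field name in the event payload
TIME_PREFIX = "time"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 40

With event breaking and TRUNCATE set correctly, the automatic JSON extractions should start working again.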
I want to group unique error strings coming from different logs. The events are from different applications with different logging formats. I am creating a report that shows the count of events for each unique error string. The boundary condition for determining which strings to match: all events that have the "error" keyword in the log statement.
Thanks for looking into it. I think your solution would work if there were a specific set of errors, but in my case there is no specific list of errors. The errors come from different logs with different logging formats.
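When there is no fixed list of errors, one hedged option is Splunk's cluster command, which groups events whose text is similar; the index, threshold, and keyword below are placeholders to adapt.

index=your_app_index "error"
| cluster t=0.6 showcount=true
| table cluster_count, _raw
| sort - cluster_count

Lowering t groups more aggressively, while raising it keeps clusters tighter; the cluster_count column then gives the per-group event count for the report.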
Not by lowering the font size, but by using special Unicode characters that look like ampersands but are not treated as ampersands. Try copying and pasting the ampersand-like characters from my post.
I'm not sure if that URL is written correctly in your post, but the 8000 is a port, not part of the path, e.g. "https://my_sh:8000/saml/logout". If that is not the issue, could you try searching your internal logs for keywords like "Saml" or "samlresponse"? Perhaps there will be a more detailed error message.

index=_internal SamlResponse
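A slightly broader internal search along these lines may surface the underlying validation error; sourcetype=splunkd and the component/log_level fields are standard for Splunk's internal logs, but treat the keyword list as a starting point to adjust.

index=_internal sourcetype=splunkd log_level=ERROR (saml OR samlresponse)
| table _time, component, _raw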
In Metric Finder we can't see anything for the service.name. As far as I know, my PHP app is using zero-code instrumentation. I have generated a lot of traffic and nothing shows up. I performed a purchase, added items to a cart, etc., and none of these spans are being reflected in O11y Cloud. I don't know about Splunk HEC; I'm just starting to use Splunk O11y and need to study more about the HTTP Event Collector.
I have configured Splunk with SAML (ADFS), but we are facing an issue during logout, with the following error message: "Failed to validate SAML logout response received from IdP". I have set the URL below as the logout URL in the SAML configuration: "https://my_sh:8000/saml/logout". How can I overcome this issue?
Splunk Support Update: Regarding your question about the best way to ingest Azure Firewall logs into Splunk, I would recommend using Event Hub for this purpose. Event Hub allows you to stream real-time data, which is ideal for continuous log ingestion. On the other hand, using Storage Blob as an input can lead to delays, especially as log sizes increase, and could also result in data duplication.
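As a rough sketch of the Azure side, a diagnostic setting can route the firewall log categories to an Event Hub with the Azure CLI roughly as below; the names, IDs, and category list are placeholders, and the exact flags should be verified against your az version.

az monitor diagnostic-settings create \
  --name fw-to-eventhub \
  --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<firewall-name> \
  --event-hub <eventhub-name> \
  --event-hub-rule /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey \
  --logs '[{"category": "AzureFirewallApplicationRule", "enabled": true}, {"category": "AzureFirewallNetworkRule", "enabled": true}]'

An Event Hub input on the Splunk side (for example via the Splunk Add-on for Microsoft Cloud Services) can then consume that stream.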
In a past environment I ran isolated HFs specifically for HEC, with no other purpose. I was able to tune them for HEC processing because there were no conflicting use cases for the compute power. I had two sites with two HFs per site, all acting in full HA behind site LB and local LB configurations. The HFs were pointed at the deployment servers so the HEC inputs and config could be updated in a central location with automatic distribution. To scale up, I would only have to stand up another HF in either location, one at a time or in bulk, have it point to the DS, and confirm its local logs were arriving at the indexers. Then I could add the new server addresses to the server class so the HEC inputs/config would be deployed, and update the LB to include the new server(s) in the pool for that particular site. Very convenient and not difficult once set up, although not for novice users. Plugging an HF into a DS can come with issues if you are not 100% aware of how apps are named.
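As a minimal sketch of that wiring (hostnames, the token, and the index are placeholders, not the original configuration), each HF carries a deployment client config and receives a small HEC app from the DS:

# deploymentclient.conf on each HF - points the forwarder at the deployment server
[target-broker:deploymentServer]
targetUri = ds.example.com:8089

# inputs.conf inside the deployed HEC app
[http]
disabled = 0
port = 8088

[http://app_logs_token]
disabled = 0
token = <generated-guid>
index = app_logs

Adding capacity is then just a new HF with the same deploymentclient.conf, membership in the same server class, and a new member in the load balancer pool.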
The messages in your /var/log/syslog appear to be a separate issue. If you want to collect logs, you can either configure your "splunk_hec" exporter to use your own Splunk HEC endpoint and token, or you can disable logs for now by removing "splunk_hec" from your logs pipeline (service -> pipelines -> logs -> exporters, remove splunk_hec). I'm wondering if you are getting APM data into O11y Cloud but perhaps aren't generating traffic that creates spans. Can you be sure to generate some test traffic in your app that will definitely create a span? Something that calls another app or API would be ideal. You can also look for clues in the Metric Finder: search there for the service.name you defined in your instrumentation and see if you're getting any metrics for that service.name.
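For illustration, the relevant part of the collector config might look like the sketch below; the endpoint, token, and receiver/processor names are placeholders, so adjust them to whatever your agent config actually defines.

exporters:
  splunk_hec:
    endpoint: "https://your-splunk-server:8088/services/collector"
    token: "${SPLUNK_HEC_TOKEN}"
    index: "main"

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      # either point splunk_hec at a valid HEC endpoint as above,
      # or remove it from this list to stop the HTTP 404 / "sending queue is full" errors
      exporters: [splunk_hec]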
Did you use the Windows uninstaller to remove the programs? That's the preferred method on Windows systems. It's not necessary to install both Splunk Enterprise and a Universal Forwarder (UF) on the same system; the installers don't even allow it, IIRC. Everything the UF can do, Splunk Enterprise can do.
I had used Splunk Enterprise (free trial version) and the Universal Forwarder on my PC (Windows 11), but I uninstalled them because of a problem with my PC. I want to re-install SE and the UF, but the installers output an error: "This version of Splunk Enterprise has already been installed on this PC". I tried deleting the Splunk and UniversalForwarder registry entries and program files, and ran the command "sc delete Splunk" in cmd, but the installer output is the same. If you know how to troubleshoot this, please tell me.
Using map for this would be extremely inefficient.
+1 on that. While an input/parsing HF layer isn't exactly SVA, I like this approach because it isolates inputs and index-time parsing settings from the indexers, so the only maintenance you need to do on the indexers is strictly about indexes and storage.
This is /var/log/syslog. This log loops; it's always the same:

Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:535
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).sendItems
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:261
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).startLoop
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:221
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:27.551+0100#011warn#011batchprocessor@v0.113.0/batch_processor.go:263#011Sender failed#011{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending queue is full"}
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:28.109+0100#011info#011internal/retry_sender.go:126#011Exporting failed. Will retry the request after interval.#011{"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "HTTP 404 \"Not Found\"", "interval": "44.55953462s"}
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:28.555+0100#011error#011internal/base_exporter.go:130#011Exporting failed. Rejecting data.#011{"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "sending queue is full", "rejected_items": 77}
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*BaseExporter).Send
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/exporter@v0.113.0/exporterhelper/internal/base_exporter.go:130
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsRequest.func1
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/exporter@v0.113.0/exporterhelper/logs.go:138
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/consumer@v0.113.0/logs.go:26
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/splunkhecexporter.(*perScopeBatcher).ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011github.com/open-telemetry/opentelemetry-collector-contrib/exporter/splunkhecexporter@v0.113.0/batchperscope.go:50
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: github.com/open-telemetry/opentelemetry-collector-contrib/pkg/batchperresourceattr.(*batchLogs).ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011github.com/open-telemetry/opentelemetry-collector-contrib/pkg/batchperresourceattr@v0.113.0/batchperresourceattr.go:172
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/internal/fanoutconsumer.(*logsConsumer).ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/internal/fanoutconsumer@v0.113.0/logs.go:73
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/processorhelper.NewLogs.func1
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor@v0.113.0/processorhelper/logs.go:66
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/consumer@v0.113.0/logs.go:26
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:535
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).sendItems
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:261
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).startLoop
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:221
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:28.555+0100#011warn#011batchprocessor@v0.113.0/batch_processor.go:263#011Sender failed#011{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending queue is full"}
Dec 14 10:19:35 PCWIN11-LPOLLI systemd-resolved[121]: Clock change detected. Flushing caches.

This is /var/log/dmesg. Not so helpful.

[ 0.537467] kernel: Adding 4194304k swap on /dev/sdb. Priority:-2 extents:1 across:4194304k
[ 1.252079] kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x25a399d04c4, max_idle_ns: 440795206293 ns
[ 1.256080] kernel: clocksource: Switched to clocksource tsc
[ 2.039875] kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 2.700859] kernel: hv_pci 1da5db4a-ecd2-4832-aa94-05f9e3555f64: PCI VMBus probing: Using version 0x10004
[ 2.703245] kernel: hv_pci 1da5db4a-ecd2-4832-aa94-05f9e3555f64: PCI host bridge to bus ecd2:00
[ 2.703785] kernel: pci_bus ecd2:00: root bus resource [mem 0xc00000000-0xe00001fff window]
[ 2.704246] kernel: pci_bus ecd2:00: No busn resource found for root bus, will use [bus 00-ff]
[ 2.705198] kernel: pci ecd2:00:00.0: [1af4:105a] type 00 class 0x088000
[ 2.707331] kernel: pci ecd2:00:00.0: reg 0x10: [mem 0xe00000000-0xe00000fff 64bit]
[ 2.709397] kernel: pci ecd2:00:00.0: reg 0x18: [mem 0xe00001000-0xe00001fff 64bit]
[ 2.711072] kernel: pci ecd2:00:00.0: reg 0x20: [mem 0xc00000000-0xdffffffff 64bit]
[ 2.714335] kernel: pci_bus ecd2:00: busn_res: [bus 00-ff] end is updated to 00
[ 2.714818] kernel: pci ecd2:00:00.0: BAR 4: assigned [mem 0xc00000000-0xdffffffff 64bit]
[ 2.716490] kernel: pci ecd2:00:00.0: BAR 0: assigned [mem 0xe00000000-0xe00000fff 64bit]
[ 2.718167] kernel: pci ecd2:00:00.0: BAR 2: assigned [mem 0xe00001000-0xe00001fff 64bit]
[ 2.727428] kernel: virtiofs virtio1: Cache len: 0x200000000 @ 0xc00000000
[ 2.789025] kernel: memmap_init_zone_device initialised 2097152 pages in 20ms
[ 2.800876] kernel: FS-Cache: Duplicate cookie detected
[ 2.803571] kernel: FS-Cache: O-cookie c=00000005 [p=00000002 fl=222 nc=0 na=1]
[ 2.804522] kernel: FS-Cache: O-cookie d=000000003e7c27de{9P.session} n=0000000049c81f03
[ 2.805210] kernel: FS-Cache: O-key=[10] '34323934393337353730'
[ 2.806200] kernel: FS-Cache: N-cookie c=00000006 [p=00000002 fl=2 nc=0 na=1]
[ 2.806933] kernel: FS-Cache: N-cookie d=000000003e7c27de{9P.session} n=00000000ccf62711
[ 2.807563] kernel: FS-Cache: N-key=[10] '34323934393337353730'
[ 3.062342] kernel: scsi 0:0:0:2: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
[ 3.115930] kernel: sd 0:0:0:2: Attached scsi generic sg2 type 0
[ 3.216153] kernel: sd 0:0:0:2: [sdc] 2147483648 512-byte logical blocks: (1.10 TB/1.00 TiB)
[ 3.260507] kernel: sd 0:0:0:2: [sdc] 4096-byte physical blocks
[ 3.265571] kernel: sd 0:0:0:2: [sdc] Write Protect is off
[ 3.273898] kernel: sd 0:0:0:2: [sdc] Mode Sense: 0f 00 00 00
[ 3.275831] kernel: sd 0:0:0:2: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 3.284253] kernel: sd 0:0:0:2: [sdc] Attached SCSI disk
[ 3.297655] kernel: EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: discard,errors=remount-ro,data=ordered. Quota mode: none.
[ 3.862735] kernel: misc dxg: dxgk: dxgkio_is_feature_enabled: Ioctl failed: -22
[ 3.874190] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ 3.877247] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ 3.879783] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ 3.881239] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ 4.763755] unknown: WSL (2) ERROR: WaitForBootProcess:3352: /sbin/init failed to start within 10000
[ 4.763760] unknown: ms
[ 4.781653] unknown: WSL (2): Creating login session for luizpolli

If you need anything else, let me know.
What problem are you trying to solve?  What is meant by "total of messages on a source"?
Consider using the Least Privilege feature.  It allows a forwarder to read any file on the system.  See https://www.splunk.com/en_us/blog/learn/least-privilege-principle.html and https://docs.splunk.com/Documentation/Forwarder/9.3.2/Forwarder/Installleastprivileged
I need to replace the wc -l command because I want to build a dashboard showing the total number of messages for a source.
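If the data is already in Splunk, a simple per-source count (the index and time range are placeholders to adapt) could look like this:

index=your_index
| stats count AS total_messages BY source

For large datasets, | tstats count AS total_messages WHERE index=your_index BY source does the same thing more efficiently from the index metadata.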
Well, it seems the map command does work in my environment. There is no relation between the two queries. To be more specific, I have a full query that returns everything I need in named columns. I then want to use one of the fields from this query in the search parameters for a second query and return the result as an additional column:

Query 1:

index=indexA source=/dir1/dir2/*/*/file.txt
| rex field=source "\/dir1\/dir2\/(?<variableA>.+?(?=\/))\/(?<variableB>.+?(?=\/)).*"
| table variableA, variableB

This will give me 1000 events.

Query 2:

index=indexA source=/dir1/dir2/$variableA$/$variableB$/file2.txt
| rex field=_raw "(?<variableC>.+?(?=\/))*"

This will give me one event.

I then want my table to be variableA, variableB, variableC, where variableC is the same for each of the 1000 events returned from Query 1.
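Since map runs one subsearch per input row, a single combined search is usually a better fit here. A sketch, assuming variableC can be extracted from the file2.txt events with the same rex (the fname extraction and the eventstats step are my additions, not part of the original queries):

index=indexA (source=/dir1/dir2/*/*/file.txt OR source=/dir1/dir2/*/*/file2.txt)
| rex field=source "\/dir1\/dir2\/(?<variableA>[^\/]+)\/(?<variableB>[^\/]+)\/(?<fname>[^\/]+)$"
| rex field=_raw "(?<variableC>.+?(?=\/))"
| eval variableC=if(fname="file2.txt", variableC, null())
| eventstats values(variableC) AS variableC BY variableA, variableB
| where fname="file.txt"
| table variableA, variableB, variableC

eventstats copies the single variableC value from the file2.txt event onto every file.txt event that shares the same variableA/variableB pair.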
Probably something like this:

for i in $(find /opt/splunk/etc -type f \( -name savedsearches.conf -o -name "*.xml" \) -print0 | xargs -0 egrep -l "<your old index>" | egrep -v '\.old'); do echo "file:" $i; sed -i.backup -e 's/<your old index>/<your new index>/g' $i; done

Check sed's parameters (on GNU sed the in-place flag with a backup suffix is -i.backup) and also test this first!!!! You run this at your own responsibility, without any guarantees!