All Posts

Using map for this would be extremely inefficient.
+1 on that. While an input/parsing HF layer isn't exactly part of SVA, I like this approach because it isolates inputs and index-time parsing settings from the indexers. That way the only maintenance you need to do on the indexers is strictly about indexes and storage.
This is /var/log/syslog. This log just loops; it's always the same:

Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:535
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).sendItems
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:261
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).startLoop
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:221
Dec 14 10:19:27 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:27.551+0100#011warn#011batchprocessor@v0.113.0/batch_processor.go:263#011Sender failed#011{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending queue is full"}
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:28.109+0100#011info#011internal/retry_sender.go:126#011Exporting failed. Will retry the request after interval.#011{"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "HTTP 404 \"Not Found\"", "interval": "44.55953462s"}
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:28.555+0100#011error#011internal/base_exporter.go:130#011Exporting failed. Rejecting data.#011{"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "sending queue is full", "rejected_items": 77}
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*BaseExporter).Send
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/exporter@v0.113.0/exporterhelper/internal/base_exporter.go:130
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsRequest.func1
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/exporter@v0.113.0/exporterhelper/logs.go:138
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/consumer@v0.113.0/logs.go:26
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/splunkhecexporter.(*perScopeBatcher).ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011github.com/open-telemetry/opentelemetry-collector-contrib/exporter/splunkhecexporter@v0.113.0/batchperscope.go:50
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: github.com/open-telemetry/opentelemetry-collector-contrib/pkg/batchperresourceattr.(*batchLogs).ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011github.com/open-telemetry/opentelemetry-collector-contrib/pkg/batchperresourceattr@v0.113.0/batchperresourceattr.go:172
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/internal/fanoutconsumer.(*logsConsumer).ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/internal/fanoutconsumer@v0.113.0/logs.go:73
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/processorhelper.NewLogs.func1
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor@v0.113.0/processorhelper/logs.go:66
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/consumer@v0.113.0/logs.go:26
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:535
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).sendItems
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:261
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: go.opentelemetry.io/collector/processor/batchprocessor.(*shard[...]).startLoop
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: #011go.opentelemetry.io/collector/processor/batchprocessor@v0.113.0/batch_processor.go:221
Dec 14 10:19:28 PCWIN11-LPOLLI otelcol[211214]: 2024-12-14T10:19:28.555+0100#011warn#011batchprocessor@v0.113.0/batch_processor.go:263#011Sender failed#011{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending queue is full"}
Dec 14 10:19:35 PCWIN11-LPOLLI systemd-resolved[121]: Clock change detected. Flushing caches.

This is /var/log/dmesg. Not so helpful.

[ 0.537467] kernel: Adding 4194304k swap on /dev/sdb. Priority:-2 extents:1 across:4194304k
[ 1.252079] kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x25a399d04c4, max_idle_ns: 440795206293 ns
[ 1.256080] kernel: clocksource: Switched to clocksource tsc
[ 2.039875] kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 2.700859] kernel: hv_pci 1da5db4a-ecd2-4832-aa94-05f9e3555f64: PCI VMBus probing: Using version 0x10004
[ 2.703245] kernel: hv_pci 1da5db4a-ecd2-4832-aa94-05f9e3555f64: PCI host bridge to bus ecd2:00
[ 2.703785] kernel: pci_bus ecd2:00: root bus resource [mem 0xc00000000-0xe00001fff window]
[ 2.704246] kernel: pci_bus ecd2:00: No busn resource found for root bus, will use [bus 00-ff]
[ 2.705198] kernel: pci ecd2:00:00.0: [1af4:105a] type 00 class 0x088000
[ 2.707331] kernel: pci ecd2:00:00.0: reg 0x10: [mem 0xe00000000-0xe00000fff 64bit]
[ 2.709397] kernel: pci ecd2:00:00.0: reg 0x18: [mem 0xe00001000-0xe00001fff 64bit]
[ 2.711072] kernel: pci ecd2:00:00.0: reg 0x20: [mem 0xc00000000-0xdffffffff 64bit]
[ 2.714335] kernel: pci_bus ecd2:00: busn_res: [bus 00-ff] end is updated to 00
[ 2.714818] kernel: pci ecd2:00:00.0: BAR 4: assigned [mem 0xc00000000-0xdffffffff 64bit]
[ 2.716490] kernel: pci ecd2:00:00.0: BAR 0: assigned [mem 0xe00000000-0xe00000fff 64bit]
[ 2.718167] kernel: pci ecd2:00:00.0: BAR 2: assigned [mem 0xe00001000-0xe00001fff 64bit]
[ 2.727428] kernel: virtiofs virtio1: Cache len: 0x200000000 @ 0xc00000000
[ 2.789025] kernel: memmap_init_zone_device initialised 2097152 pages in 20ms
[ 2.800876] kernel: FS-Cache: Duplicate cookie detected
[ 2.803571] kernel: FS-Cache: O-cookie c=00000005 [p=00000002 fl=222 nc=0 na=1]
[ 2.804522] kernel: FS-Cache: O-cookie d=000000003e7c27de{9P.session} n=0000000049c81f03
[ 2.805210] kernel: FS-Cache: O-key=[10] '34323934393337353730'
[ 2.806200] kernel: FS-Cache: N-cookie c=00000006 [p=00000002 fl=2 nc=0 na=1]
[ 2.806933] kernel: FS-Cache: N-cookie d=000000003e7c27de{9P.session} n=00000000ccf62711
[ 2.807563] kernel: FS-Cache: N-key=[10] '34323934393337353730'
[ 3.062342] kernel: scsi 0:0:0:2: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
[ 3.115930] kernel: sd 0:0:0:2: Attached scsi generic sg2 type 0
[ 3.216153] kernel: sd 0:0:0:2: [sdc] 2147483648 512-byte logical blocks: (1.10 TB/1.00 TiB)
[ 3.260507] kernel: sd 0:0:0:2: [sdc] 4096-byte physical blocks
[ 3.265571] kernel: sd 0:0:0:2: [sdc] Write Protect is off
[ 3.273898] kernel: sd 0:0:0:2: [sdc] Mode Sense: 0f 00 00 00
[ 3.275831] kernel: sd 0:0:0:2: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 3.284253] kernel: sd 0:0:0:2: [sdc] Attached SCSI disk
[ 3.297655] kernel: EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: discard,errors=remount-ro,data=ordered. Quota mode: none.
[ 3.862735] kernel: misc dxg: dxgk: dxgkio_is_feature_enabled: Ioctl failed: -22
[ 3.874190] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ 3.877247] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ 3.879783] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22
[ 3.881239] kernel: misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2
[ 4.763755] unknown: WSL (2) ERROR: WaitForBootProcess:3352: /sbin/init failed to start within 10000
[ 4.763760] unknown: ms
[ 4.781653] unknown: WSL (2): Creating login session for luizpolli

If you need anything else, let me know.
What problem are you trying to solve?  What is meant by "total of messages on a source"?
Consider using the Least Privilege feature.  It allows a forwarder to read any file on the system.  See https://www.splunk.com/en_us/blog/learn/least-privilege-principle.html and https://docs.splunk.com/Documentation/Forwarder/9.3.2/Forwarder/Installleastprivileged
I need to replace the wc -l command because I want to see a dashboard of the total number of messages per source.
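In SPL that count is usually produced with stats rather than an external wc -l. A minimal sketch, where the index name is just a placeholder for your data:

index=your_index
| stats count AS total_messages BY source

If the dashboard should show a trend rather than a single total, a timechart variant such as | timechart span=1h count by source works the same way.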
Well, it seems the map command does work in my environment. There is no relation between the two queries. To be more specific, I have a full query that returns everything I need in named columns. I then want to use one of the fields from this query in the search parameters for a second query and return the result as an additional column:

Query 1
index=indexA source=/dir1/dir2/*/*/file.txt
| rex field=source "\/dir1\/dir2\/(?<variableA>.+?(?=\/))\/(?<variableB>.+?(?=\/)).*"
| table variableA, variableB
This will give me 1000 events.

Query 2
index=indexA source=/dir1/dir2/$variableA$/$variableB$/file2.txt
| rex field=_raw "(?<variableC>.+?(?=\/))*"
This will give me one event.

I then want my table to be variableA, variableB, variableC, where variableC is the same for each of the 1000 events returned from Query 1.
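Since map would run the second search once per result row, a hedged alternative is to search both files at once and attach variableC to each variableA/variableB group with eventstats. This is only a sketch based on the paths shown above, not a tested solution; fname is a helper field introduced here for illustration, not one of your existing fields:

index=indexA (source=/dir1/dir2/*/*/file.txt OR source=/dir1/dir2/*/*/file2.txt)
| rex field=source "\/dir1\/dir2\/(?<variableA>[^\/]+)\/(?<variableB>[^\/]+)\/(?<fname>[^\/]+)$"
| rex field=_raw "(?<variableC>.+?(?=\/))*"
| eval variableC=if(fname="file2.txt", variableC, null())
| eventstats values(variableC) AS variableC BY variableA, variableB
| where fname="file.txt"
| table variableA, variableB, variableC

The eventstats copies the variableC value found in the file2.txt event onto every file.txt event that shares the same variableA/variableB, which gives the table layout described above.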
Probably something like this:

for i in $(find /opt/splunk/etc -type f \( -name savedsearches.conf -o -name "*.xml" \) -print0 | xargs -0 egrep -l "<your old index>" | egrep -v '\.old'); do
  echo "file:" $i
  sed -e 's/<your old index>/<your new index>/g' -i.backup $i
done

Check sed's parameters (GNU sed uses -i[SUFFIX] for in-place editing with a backup) and also test this first!!!! You will run this at your own responsibility, without any guarantees!
Hi, those error messages mean that you don't have enough space on your indexers, which you already know and are trying to fix. You may even have so little free space left that the CM cannot push the new bundle to the search peers. You need to log into those nodes, or use other tools that can check the disk space situation on all of them. It's quite possible that you must manually delete or move some data off those disk partitions before you can apply a new cluster bundle. But it's hard to say before we know the real situation on those search peers. Btw, have you also tried to apply that cluster bundle from the GUI, or only validated it? r. Ismo
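One quick way to see the disk situation on every search peer from the search head is the partitions-space REST endpoint. This is just a sketch; field names such as free and capacity can differ slightly between versions, so check the raw output first:

| rest /services/server/status/partitions-space splunk_server=*
| eval pct_free=round(free/capacity*100,1)
| table splunk_server, mount_point, fs_type, capacity, free, pct_free
| sort pct_free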
What do you mean by "We fail again and again"? What kind of environment do you have? Distributed, with separate HEC nodes behind a LB? Basically you could create e.g. a dashboard that looks at status information from the _internal and _introspection logs. After that you could also create alerts based on your normal and abnormal behaviour. r. Ismo
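As a starting point for such a dashboard, here is a sketch that trends HEC-related errors from _internal; the component name is an assumption on my part, so adjust it to whatever actually shows up in your own _internal data:

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
| timechart span=5m count BY host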
You should look at this: https://docs.splunk.com/Documentation/SVA/current/Architectures/TopologyGuidance It contains the preferred Splunk architecture layouts. Remember that if you have a lot of HEC inputs and need to update or add them regularly, this impacts your indexers if you are using them, instead of HFs, for HEC inputs. For that reason I personally prefer to use a couple of HFs behind a LB as a HEC cluster instead of configuring HEC directly on the indexers. Here are some instructions on how to tune HEC:
- https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-HEC-perf-tuning-configs/m-p/601629
- https://community.splunk.com/t5/Getting-Data-In/Can-we-have-fewer-Heavy-Forwarders-than-Indexers/m-p/551485
You could check whether there are additional ACLs set on that directory, and especially on those files. Just sudo to root (if possible) and then use the getfacl command to look at them: https://www.computerhope.com/unix/ugetfacl.htm How are those file inputs defined in your inputs.conf? I think that with additional ACLs it's possible to set permissions so that you can read those files directly from that directory even if you cannot cd into it.
Here is an old example that will probably help you understand how to use it:

<form version="1.1">
  <label>Time Picker Control</label>
  <init>
    <set token="earliest">-24h</set>
    <set token="latest">now</set>
  </init>
  <fieldset submitButton="false">
    <input type="time" token="time_range">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
      <change>
        <eval token="earliest">if(relative_time</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Simple timechart</title>
      <chart>
        <title>$ranges$</title>
        <search>
          <query>index=_audit | timechart span=1h count </query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
    <panel>
      <title>Calculation panel that limits historical range</title>
      <table>
        <search>
          <done>
            <set token="earliest">$result.earliest$</set>
            <set token="latest">$result.info_max_time$</set>
            <set token="ranges">$result.ranges$</set>
          </done>
          <query>| makeresults
| addinfo
| eval min_time=now()-(30*86400)
| eval earliest=if(info_min_time &lt; min_time, min_time, info_min_time)
| eval initial_range="Time Picker range: ".strftime(info_min_time, "%F %T")." to ".strftime(info_max_time, "%F %T")
| eval limited_range="Search range ".strftime(earliest, "%F %T")." to ".strftime(info_max_time, "%F %T")
| eval ranges=mvappend(initial_range, limited_range)
| table ranges earliest info_min_time info_max_time</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

I cannot remember who presented it or when; probably here or on Slack.
In your case (as you have multiline and multiple events in one JSON file) you should use INDEXED_EXTRACTIONS=json on your UF side. So remove it from the SH side. If I read this file correctly, it contains 25 events? Unfortunately I don't have a suitable environment to test this across UF -> IDX -> SH, but just leave INDEXED_EXTRACTIONS in the UF's props.conf (restart the UF after that) and remove it from the SH (and from the IDX side if you have it there too). Then it should work. Usually props.conf should/must be on the indexer or on the first full Splunk Enterprise instance on the path from UF to IDX. You should also put it on the SH when it contains search-time definitions that are needed there. There are only a few settings that must be on the UF side. This https://www.aplura.com/assets/pdf/where_to_put_props.pdf describes when and where you should put it when you are ingesting events. You can find more instructions on at least Splunk Lantern and docs.splunk.com. BTW, why are you using jq to pretty-print that JSON file? It adds a lot of additional spaces, newline characters and other unnecessary content to your input file. Those characters just increase your license consumption!
I'm not sure how to interpret your question. Do you mean $t_time.latest$ comes from an input selector? (@isoutamo's link shows how to retrieve the value after a search is complete.) For this, one way to handle it is to test its value before format:

index="abc" search_name="def"
    [| makeresults
     | eval earliest=relative_time($t_time.latest$,"-1d@d")
     | eval latest=if("$t_time.latest$" == "now", now(), relative_time($t_time.latest$,"@d"))
     | fields earliest latest
     | format]
| table _time zbpIdentifier
Glad it worked out.  JSON allows for semantic expression.  The more traditional "Splunk" trick is to use string concatenation and then split after stats.  The tojson command is present in all currently supported Splunk versions; in this case, it is also very concise. If you remove the rest of the search after that chart, you'll see something like this:

_raw                                                                          false  true
{"lastLogin":"2024-12-12T23:42:47","userPrincipalName":"yliu"}                       28
{"lastLogin":"2024-12-13T00:58:38","userPrincipalName":"splunk-system-user"}  290    150

The intent is to construct a chart that renders the desired table layout while retaining all the data needed to produce the final presentation. (This is why I asked for a mockup table, so I know how you want to present the data.  Presentation does influence the solution.)
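For comparison, a minimal sketch of that string-concatenation trick; the split-by field status is hypothetical here, and it assumes userPrincipalName and lastLogin never contain the | separator:

| eval key=userPrincipalName."|".lastLogin
| chart count over key by status
| rex field=key "^(?<userPrincipalName>[^|]+)\|(?<lastLogin>.+)$"
| fields - key
| table userPrincipalName lastLogin *

The rex after the chart splits the concatenated key back into its original fields, which is the step the tojson approach handles more cleanly.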
Unless you start Splunk on each of those intermediate versions, it doesn't perform the conversions and other actions that are needed before the next update. Now you have done a direct update from 9.0.x to 9.3.2, and this is not a supported way. Usually Splunk is installed as root, but it should run as the splunk (or another non-root) user. Have you looked at what the logs say, especially migration.log and splunkd.log?
I see -

$ ps -ef | grep splunk
splunk 2802446 2802413 0 Dec08 ? 00:00:08 [splunkd pid=2802413] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]

Meaning, the splunk user is what runs on the host, and when I sudo to the splunk user, I don't have access to these log files, even though they are being ingested.
I love the idea of HEC inputs directly on the indexers (with an LB in front of them)!
You have no _time in your output fields. See https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/UseSireportingcommands#Summary_indexing_of_data_without_timestamps
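If it helps, a minimal sketch of keeping _time in the results that feed the summary index; the index name and span are placeholders, assuming a scheduled search that collects counts per source:

index=your_index
| bin _time span=1h
| stats count BY _time, source

Because _time survives the stats, the events written to the summary index carry proper timestamps.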