
All Posts

It was easy enough to test the theory by just using an eval to set the recipients list:

index="_internal" earliest="-1s@s" latest="@s"
| eval recipients="employee1@mail.se,employee2@mail.se"

Then, using the results.recipients token for the alert, no problem. However, trying to get the field assigned using a macro: not so easy. So I figured I'd try the lookup approach. This works, but as I'm not strictly doing a "lookup" I had some problems getting it to work. The best I came up with was this:

| inputlookup email-recipients.csv
| eval recipients = 'email-adresses'
| append [search index="_internal" earliest="-1s@s" latest="@s" | head 1]

"email-adresses" is a comma-separated list of email addresses in the lookup (just like above, one long line). Though here I have to lead with inputlookup, which makes the search part a bit less aesthetically pleasing, and the append produces some really nasty-looking output. I have "a" solution, though as I am often "doing it wrong" I thought I'd ask whether there are any suggestions on how to improve it.
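One possibly cleaner variant, as an untested sketch: if the alert only reads the first result row, appendcols can attach the lookup field column-wise to that row, avoiding the extra appended event entirely (the lookup file and field name are the ones from the post):

index="_internal" earliest="-1s@s" latest="@s" | head 1
| appendcols [| inputlookup email-recipients.csv | rename "email-adresses" AS recipients | fields recipients]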
No, the view you have is a table view, and table views have unique column names and do not support additional header rows such as the month shown in your graphic.
I don't know if it is a bug, but from the documentation (https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/PropagateSHCconfigurationchanges) I understand that if you want to change the behaviour for a specific app, the settings must be in local/app.conf (app setting), not default/app.conf (global setting):

(Optional) Set the deployer push mode for one app only in the [shclustering] stanza in $SPLUNK_HOME/etc/shcluster/apps/<app>/local/app.conf for that specific app. You might need to add the [shclustering] stanza to the app.conf file if it is not already present. For example:
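(The documentation's example is cut off above; a minimal sketch of what such a per-app stanza might look like, with the mode value chosen purely for illustration:)

[shclustering]
deployer_push_mode = merge_to_default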
Hi, isn't it possible with a join of charts?
This is not possible with a single standard visualisation - you could have multiple panels with a different month in each panel.
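A sketch of what one such per-month panel search might look like, assuming the base search from the question below (index, sourcetype, and field names are the asker's own placeholders):

index=xxxx sourcetype=xxx earliest=-mon@mon latest=@mon
| table control indicador cumplimiento
| sort control
| chart values(cumplimiento) over control by indicador

Each panel would then use a different earliest/latest window (or a filter on the Month field) to show its own month.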
Hi, this sounds like a bug! I have assumed that an app's app.conf applies only to that app, not as a global setting, unless the docs say otherwise. You should raise a support case about this and, if needed, ask that the docs be updated to describe the planned behaviour. r. Ismo
Hi, I investigated my case further and found why the changes were not propagated to the search head cluster. I had installed, in the apps directory on my deployer, an add-on that backs up config files on a schedule. This add-on has a default/app.conf file with an shclustering stanza like this:

[shclustering]
# full as local changes are pushed via deployer and we want to preserve them
deployer_push_mode = full
# lookups from deployer priority
deployer_lookups_push_mode = always_overwrite

This setting changed the default push mode for every deployment from the deployer, so I decided to remove this app from the deployer. Now all lookups are preserved, and all config files in the local directory pushed by the deployer are merged into the default directory on the cluster members, as expected from the documentation.
Hi, you could start with the license usage GUI, open that search, and then modify its earliest attribute. At least I have this information under Settings -> License -> Usage Report -> Previous 60 days. Here is the SPL from there:

index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-60d@d
| eval _time=_time - 43200
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| join type=outer _time [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-60d@d | eval _time=_time - 43200 | bin _time span=1d | dedup _time stack | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

You must run this on the node where you have your license, or update the `set_local_host` part accordingly. It seems that, at least in 9.1.2, this is broken: the dashboard says it covers the last 60 days, but in the SPL it was only the last 30 days! r. Ismo
Hi, colleagues. Can somebody help me? I have this query for the current day (figure 1):

index=xxxx sourcetype=xxx earliest=-d@d latest=now
| table control indicador cumplimiento
| sort control
| chart values(cumplimiento) over control by indicador

The fields are: Empleo, Huérfanas, Uso, Control, _time and Month. My problem is that I also need to show the data as in figure 2 (by month), but I can't find a way to display it as in that figure (with the month on top). Has somebody had a similar problem?
Hi, this says that the small bucket count in _internal is 4, which is not very high. Do you know of any reason why this happened, e.g. a service/server reboot or some other reason why those buckets rolled from hot to warm? In any case, you need to know why those buckets rolled before you can fix the issue. Some possible reasons could be:
- a reboot
- Splunk manually rolled buckets containing bad data (e.g. timestamp issues)
- you are re-ingesting old and new log files/data at the same time
r. Ismo
Hi, some additions to @gcusello's answer. Usually you don't need any roles/users other than admins on indexers, and those usually only if/when there is a need for CLI/REST API work. On Splunk Cloud you cannot have any roles/users on the indexers at all. In Splunk, all access to data is granted via users/roles defined on the SH side, not on the IDX side! When you want to use the same roles (and actually always), you should manage them with conf files in a separate app; never use the GUI for managing those. Even better if you can manage those users and role mappings as AD users and groups bound to Splunk roles in a separate app's auth*.conf files. Here is a conf presentation on RBAC which is good to read before going forward: https://conf.splunk.com/watch/conf-online.html?search.event=conf23&search=PLA1169B#/ r. Ismo
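A minimal sketch of what such a separate app's conf files might contain (the role name, index names, LDAP strategy name, and AD group are all made up for illustration):

# authorize.conf -- define the role and its index access
[role_soc_analyst]
importRoles = user
srchIndexesAllowed = firewall;proxy

# authentication.conf -- map an AD group to the role
# (assumes an LDAP strategy named "corp_ad" is already configured)
[roleMap_corp_ad]
soc_analyst = CN=SOC-Analysts,OU=Groups,DC=example,DC=com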
Hi, it's just like @richgalloway said: there is no supported way for anyone to do it. You could try the following to see if it works, BUT do it at your own risk/responsibility! I haven't tested this! The Splunk docs say that adding an old standalone indexer to a cluster does not convert its existing buckets to clustered buckets. Based on that, you could try (IF you have a TEST environment) to emulate this behaviour by moving those buckets into your cluster. BUT it needs at least the following:
- You cannot have any clustered index with the same name as the ones you are moving (e.g. any internal indexes).
- You cannot create those new buckets as clustered buckets.
If you cannot fulfil the above, then you cannot try this! Instead you should ask for help from Splunk Professional Services!
1. Stop the old indexer.
2. Transfer the wanted buckets from the source peer to an individual target node (no _* buckets can be moved, nor any already-clustered indexes with the same names!).
3. Copy the $SPLUNK_DB/<index name>.dat file from the source to the target peer for every moved index, on every indexer that has that index.
4. Update the CM's indexes conf with those new indexes. Be sure that those new indexes have the attribute repFactor = 0, not auto (see the sketch below)!
5. Apply the new indexes conf on your CM to your cluster.
6. Test, test, test.
If this fails, you will be in a situation where your cluster is probably down. To fix that, you will probably have to manually edit the indexes.conf files to remove those index configurations; they also need to be removed from the CM, followed by a new apply. But as I said: you must test this first in your test environment, ensure that those requirements are fulfilled, and reserve some time for possible downtime when you do it in production. And once again, you do it at your own risk and responsibility!!! r. Ismo
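A minimal sketch of what such an indexes.conf entry on the CM might look like (the index name and paths are illustrative; repFactor = 0 keeps the moved buckets out of replication, as step 4 requires):

[moved_index]
homePath = $SPLUNK_DB/moved_index/db
coldPath = $SPLUNK_DB/moved_index/colddb
thawedPath = $SPLUNK_DB/moved_index/thaweddb
repFactor = 0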
Hello @SplunkExplorer, you can switch off the XML-mode event logs using the following configuration in inputs.conf in the Splunk Add-on for Microsoft Windows:

[WinEventLog://Application]
renderXml = false

[WinEventLog://Security]
renderXml = false

--- If the above solution is helpful, an upvote is appreciated.
Hi, when you say that they have old JS in use, how is that JS implemented? Is it in a separate package in your app or inline in your dashboard? Have you also updated your app.conf's version and build attributes? r. Ismo
Hi, you should find the answer to your issue in this conf presentation: https://conf.splunk.com/watch/conf-online.html?search.event=conf23&search=PLA1169B#/ r. Ismo
Hi, as others have already said, when you inherit roles which have some index access, and especially when they have disallows etc., you are in deep s...t when you try to guess what the final combined access is. Usually you cannot do it that way! At the last conf there was an excellent presentation about how to manage RBAC in Splunk: https://conf.splunk.com/watch/conf-online.html?search.event=conf23&search=PLA1169B#/ You should read and listen to that presentation; after that you will understand the problem better and how to fix it. r. Ismo
Hi, IMHO: even though the docs say that you must use sudo when you run "splunk start/stop/restart", they don't say that you should use that instead of "systemctl start/stop/restart Splunkd"! This should be stated more clearly there; docs feedback is definitely needed. After you have configured systemd, you should use only it, not "splunk start/stop/restart" directly. The only exception you need is "splunk offline" when you are stopping one node in an indexer cluster. All other actions should use "systemctl start/stop/restart Splunkd"! r. Ismo. Docs feedback has been left.
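A short sketch of the day-to-day commands this implies (assumes the systemd unit is named Splunkd, the default when boot-start is enabled with -systemd-managed 1, and that Splunk runs as the "splunk" user under $SPLUNK_HOME):

# Normal start/stop/restart once systemd management is enabled
sudo systemctl restart Splunkd

# Only exception: taking one indexer cluster peer down gracefully
sudo -u splunk $SPLUNK_HOME/bin/splunk offline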
OK, it sounds like you have a browser that is periodically testing, and this is what you want to alert on? You probably need to convert this to an automated test which executes somewhere and produces a log of successes and failures, and then ingest this log into Splunk so you can monitor it and raise alerts if the success rate falls below 98%.
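A sketch of what the alert search might look like once such a log is ingested (the index, sourcetype, and status field are hypothetical and depend on how you log the test results):

index=synthetic_tests sourcetype=uptime_check earliest=-15m
| stats count(eval(status="success")) AS ok, count AS total
| eval success_rate = round(ok / total * 100, 2)
| where success_rate < 98

Scheduled every 15 minutes, this would trigger whenever the rolling success rate drops below 98%.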
I'm not sure about the alerts. For the events, I have an uptime event set up for my browser test and that's all. I'm sorry if I'm not making much sense; I'm new to Splunk.
I'm using the OpenTelemetry Collector to receive and export logs to my Splunk Cloud instance. I have an AWS Lambda which polls data and runs an OpenTelemetry Lambda layer, which receives the logs in OTLP format and exports them to the Splunk Cloud instance using the HEC exporter. Below is the Collector configuration:

receivers:
  otlp:
    protocols:
      http:
exporters:
  splunk_hec:
    token: ${SPLUNK_TOKEN}
    endpoint: ${HEC_ENDPOINT}
    # Source. See https://docs.splunk.com/Splexicon:Source
    source: "otel"
    # Source type. See https://docs.splunk.com/Splexicon:Sourcetype
    sourcetype: "otel"
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]

The problem is that the splunk_hec exporter fails to send the logs to my Splunk Cloud instance. I get the errors below:

max elapsed time expired Post "https://inputs.prd-p-gxyqz.splunkcloud.com:8088/services/collector/event": EOF
max elapsed time expired Post "https://inputs.prd-p-gxyqz.splunkcloud.com:8088/services/collector/event": context deadline exceeded

Can you please help me identify the issue? Also, what exactly should my HEC endpoint URL be? The documentation says the format should be <protocol>://http-inputs-<host>.splunkcloud.com:<port>/<endpoint>, but that format doesn't work.