All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is there any way in Splunk to make the app that creates an index the only app allowed to change permissions for it? Say I have an app, app1, which has an index index1, and the app has an authorize.conf with the following stanza:

[role_special_user]
srchIndexesAllowed = index1

What's to stop someone uploading a new app with their own authorize.conf to grant themselves access to my supposedly secure index?

[role_user]
srchIndexesAllowed = index1

Our platform team is not necessarily allowed to see the data in our indexes, but they need to be able to administer Splunk, including adding applications etc. How should I correctly implement access controls, or is this just not possible in Splunk?
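For reference, the merging behavior at the heart of this question can be confirmed with btool; a minimal sketch (app and role names taken from the question, the second app name is hypothetical):

```
# apps/app1/default/authorize.conf
[role_special_user]
srchIndexesAllowed = index1

# apps/rogue_app/default/authorize.conf
[role_user]
srchIndexesAllowed = index1

# $SPLUNK_HOME/bin/splunk btool authorize list --debug
# shows both stanzas merged into the effective configuration:
# authorize.conf settings are app-scoped only in file location,
# not in effect, so any installed app can grant access to index1.
```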
I have a table with 6 columns of information that I am trying to filter dynamically: date_time, src_MAC, dst_MAC, src_IP, dst_IP, protocol. I have no problem setting the table up, but I would like users to filter the information dynamically across all the fields using dropdown or text box inputs. I have been able to filter on a single field using tokens, but when I try with multiple fields it breaks the table (each row consists of a single event; with multiple tokens it breaks the event). Thank you for any help you can provide. Example row:

date_time: 2015-04-18 18:57:55.042547, src_MAC: ff:ff:ff:ff:ff:ff, dst_MAC: 78:24:af:43:0c:75, src_IP: Actionte_25:fc:ff, dst_IP: ASUSTekC_43:0c:75, protocol: ARP
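One common SimpleXML pattern for multi-field filtering is to give every input a wildcard default so unset tokens match everything. A minimal sketch with two of the six fields (the index name and token names are assumptions):

```xml
<form>
  <fieldset submitButton="false">
    <input type="text" token="src_ip_tok">
      <label>src_IP</label>
      <default>*</default>
    </input>
    <input type="text" token="proto_tok">
      <label>protocol</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=netflow src_IP="$src_ip_tok$" protocol="$proto_tok$"
| table date_time src_MAC dst_MAC src_IP dst_IP protocol</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

Because each token defaults to `*`, every event still matches until the user narrows a specific field, so combining several inputs should not break the per-event rows.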
Hi, I have uploaded data to Splunk, but when searching, the data doesn't appear. I have shared screenshots as well. Can you please help?

Index used: default
Log file type: .log
Search criteria: All time
Splunk Docker image: store/splunk/splunk:7.3
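One quick way to check whether the data was indexed at all, and where it landed, is to search across all indexes and group by metadata; a sketch:

```
index=* OR index=_* earliest=0
| stats count by index, sourcetype, source
```

If the upload appears under an unexpected index or sourcetype here, the original search was simply pointed at the wrong place; note that "default" is not a standard index name, so the data may have gone to main.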
The Splunk Docker container exits with the error below on restart (it runs fine as long as I keep it up). I was building a Splunk indexer cluster with one master and two indexer containers. The master container starts fine, but the two indexer nodes fail on restart with the error below. All three containers accept traffic from outside on different ports: 8000, 8001, 8002.

TASK [splunk_common : Start Splunk via cli] ************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["/opt/splunk/bin/splunk", "start", "--accept-license", "--answer-yes", "--no-prompt"], "delta": "0:05:20.859094", "end": "2020-04-18 09:15:03.654801", "msg": "non-zero return code", "rc": 1, "start": "2020-04-18 09:09:42.795707", "stderr": "Bypassing local license checks since this instance is configured with a remote license master."}

stdout:

splunkd 268 was not running.
Stopping splunk helpers...
Done.
Stopped helpers.
Removing stale pid file... done.

Splunk> Winning the War on Error

Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration... Done.
	Checking critical directories... Done
	Checking indexes...
		Validated: _audit _internal _introspection _telemetry _thefishbucket history main summary
	Done
	Checking filesystem compatibility... Done
	Checking conf files for problems... Done
	Checking default conf files for edits...
	Validating installed files against hashes from '/opt/splunk/splunk-7.3.0-657388c7a488-linux-2.6-x86_64-manifest'
	All installed files intact.
	Done
	Checking replication_port port [8050]: open
All preliminary checks passed.

Starting splunk server daemon (splunkd)... Done

Waiting for web server at http://127.0.0.1:8000 to be available............

WARNING: web interface does not seem to be available!

PLAY RECAP *********************************************************************
localhost : ok=18 changed=1 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
How do you handle the fact that apps like Splunk_TA_nix and Splunk_TA_windows have relative paths like [script://./bin/df.sh] that do not resolve correctly when deployed by the cluster master via master-apps to slave-apps on the indexers, resulting in failures to run and errors like this:

04-18-2020 18:07:11.694 -0400 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/Splunk_TA_nix/bin/df.sh" /bin/sh: /opt/splunk/etc/apps/Splunk_TA_nix/bin/df.sh: No such file or directory

What compounds it is that we also send these same apps to our UFs, where they work fine as-is. Obviously the problem is that the relative-path resolution code in splunkd is hard-coded to use $SPLUNK_HOME/etc/apps, while with a cluster master the apps live in $SPLUNK_HOME/etc/slave-apps/. It looks like Splunk may never fix it to be smarter, so we have to accommodate both layouts. We are looking for the most portable and lightweight method. I can think of (and have tried) at least 3 ways, but I don't really like any of them. What do you do? Is there any way to use the same inputs.conf file for clustered indexers and other nodes?
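One of the workarounds commonly discussed for this is to ship an indexer-specific copy of the input with the script path spelled out against slave-apps; a sketch of inputs.conf, assuming the df.sh input from the question (the interval value here is illustrative):

```
# Default relative form shipped with the TA (works under etc/apps on UFs):
# [script://./bin/df.sh]

# Indexer-cluster copy, deployed only via master-apps,
# with the path anchored explicitly under slave-apps:
[script://$SPLUNK_HOME/etc/slave-apps/Splunk_TA_nix/bin/df.sh]
interval = 300
disabled = 0
```

The cost of this approach is exactly the duplication the question complains about: two variants of the same inputs.conf, one per deployment path.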
Hello Splunkers, hope everyone is doing well. I am looking for an SPL search to create an alert whenever a server class or app gets updated by the deployment server. Could anyone please help?
Hello, we're planning a capacity adjustment activity (resizing of C: drives). Our Splunk forwarders are installed on the C: drives. If the disk were to become unavailable for a period of time, does this affect the forwarder in any way? Does any action need to be taken after the disk is available again (service recycle, etc.)? Thanks a bunch! Have a great day.
As of now we use CSV lookups, but some of the lookups are around 2 GB, which is creating a problem with SH replication. As far as I know, any change to a CSV lookup creates a new CSV file, which is treated as a new object, so the complete CSV gets replicated to the other SHs; in other words, a diff is not replicated for CSV lookups. In the KV store, by contrast, only the transactions for changed rows are replicated, so I think moving to the KV store should solve the SH replication issue. I know there might be an issue with the oplog size, but its limit is 1 GB and it would contain only transactions (for changed rows), so the overall size will be much smaller than a complete CSV lookup. To be on the safe side I will increase its size to 1.5 GB. So, moving to the KV store will solve the SH replication problem unless a large volume of transactions fills the oplog, which is very rare (given that only changes get replicated). Am I right?
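For anyone weighing the same migration, defining a KV store lookup takes two stanzas in the app; a minimal sketch (the collection, lookup, and field names are placeholders, not from the question):

```
# collections.conf
[my_lookup_collection]

# transforms.conf
[my_lookup]
external_type = kvstore
collection = my_lookup_collection
fields_list = _key, host, owner, status
```

Once defined, the lookup is used in SPL exactly like a CSV lookup (| lookup my_lookup host OUTPUT owner status), and rows can be updated incrementally via | outputlookup my_lookup append=true or the REST API instead of rewriting the whole file.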
I am very new to Splunk; I started learning it with online courses. I need to configure forwarding on a heavy forwarder. Here are the steps: Configure forwarding > Forward data > New forwarding host: enter hostname:port or IP:port. But I do not know how to find the IP:port. Can anyone help me?
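The host is the indexer (or other receiver) the heavy forwarder should send to, and the port is whatever the receiver was configured to listen on (9997 is the conventional Splunk receiving port, enabled on the indexer under Settings > Forwarding and receiving > Receive data). A sketch of the equivalent outputs.conf, with a placeholder indexer hostname:

```
# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997
```

The same can be done from the CLI: splunk add forward-server idx1.example.com:9997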
I have an interesting problem which I can't work out with the AWS TA, specifically for an S3 input. I am collecting CloudTrail logs from an S3 bucket (no SQS, because the existing environment was preconfigured). I am collecting the logs with a Generic S3 input, but I have limited the collection window to only events that have been written to S3 in the last week (ish). As an aside, the bucket is already lifecycle managed and only has 90 days of logs within it.

I am running the TA on an AWS HF instance with the necessary IAM roles, and data is collected; however, there is a significant delay between logs being written to S3 and being collected and indexed. After some investigation, I discovered that the HF is chewing through its entire swap disk, while physical (well, virtual) memory usage is at most ~4 GB of the total available. I have been debugging this issue on a variety of EC2 instance types (c5.4xl/2xl/xl), and the system memory/core count has no impact on the behaviour. The swap file located on the / volume would be heavily written to; this in turn would eat the available IOPS on the volume, which, when depleted, caused high iowait, and eventually the system became unresponsive.

To check whether this was a problem with the process needing a large amount of virtual memory for the initial "read", I moved the instance to an m5d, which provides an ephemeral 140 GB SSD that I used as a dedicated swap device, as it is not IOPS-limited (except by the underlying hardware). As predicted, this solved the IOPS depletion, and the iowait condition is prevented. The box has a steady load average of ~3 (yes, I know it's only got 4 cores) but is otherwise quite happy. However, the S3 Python process has consumed 124 GB of virtual memory, of which 123 GB is on swap. I have never seen anything like this before with this TA across many deployments on AWS.

The TA reports nothing untoward in the relevant log file, and while CT logs are getting in, they are taking 2-4 hours to arrive, rather than the 30 minutes configured in the input. I dumped the /swap partition with strings, and I can see that the swap file contains the data from the CT log files read from S3, so my present assumption is that the entire bucket's log files are being read into swap (multiple times, as there is only 7-8 GB of logs in the bucket). It seems that once swap is full, the oldest logs are evicted from swap, and the HF finally processes them and sends them to the indexers. Side note: if I run the HF without swap, it gets OOM-killed and restarts; no crashlog is generated. Even so, physical memory usage never peaks above 3 GB. Does anyone have any ideas what could be going on?
Hi team, I have two field values: field1=xyz.com; and field2=abc.xyz.com. I want to compare these two values with either the search or the where command. My expected result: no output in this case, because field1 (xyz.com) is contained in field2. But if field1=abc.com; and field2=xyz.com, where abc.com is neither equal to nor contained in xyz.com, only then should I get output. Note: the semicolon (;) needs to be ignored and the comparison should be case-insensitive. I tried "where field1!=field2" and "field1=.field2." but neither works. Thanks in advance.
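A sketch of one way to do this with eval functions: strip the semicolons with trim, lowercase both sides for a case-insensitive comparison, then keep only rows where field1 is neither equal to nor a suffix of field2 (the makeresults line just reproduces the example values):

```
| makeresults
| eval field1="xyz.com;", field2="abc.xyz.com"
| eval f1=lower(trim(field1, ";")), f2=lower(trim(field2, ";"))
| where f1!=f2 AND NOT like(f2, "%" . f1)
```

With the example values this row is filtered out, because xyz.com is a suffix of abc.xyz.com; with field1=abc.com; and field2=xyz.com the where clause is true and the row survives. If "contained" should mean anywhere in the string rather than at the end, the pattern would be "%" . f1 . "%".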
I have sample data from email logs containing the sender (from) and the message size. How can I extract the "top ten sending addresses by message size"? Attaching a sample data snapshot.
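A sketch of the usual shape of that search, assuming the fields are named from and message_size (the index and sourcetype are placeholders; adjust to the actual data):

```
index=mail sourcetype=email_logs
| stats sum(message_size) as total_bytes by from
| sort - total_bytes
| head 10
```

If "by message size" means largest single message rather than total volume, swap sum(message_size) for max(message_size).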
I get an error every time I try to run the following command: curl -k https://instancename.cloud.splunk.com:8088/services/collector/event -H "Authorization: Splunk TOKEN" -d '{"event": "hello world"}'. This is my self-service Splunk Cloud. I also tried Postman with the POST method and the Authorization header set to the token, but I still get: There was an error connecting to https://instancename.cloud.splunk.com//services/collector/event.
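Two things worth checking here. First, the Postman URL has a double slash before services, which should be removed. Second, Splunk Cloud does not expose HEC on the search-head hostname itself; per Splunk's HEC documentation for self-service Splunk Cloud, the hostname gets an input- prefix. A sketch of the adjusted command (the exact prefix for a given stack should be confirmed against the Splunk Cloud docs for your plan):

```
curl -k "https://input-instancename.cloud.splunk.com:8088/services/collector/event" \
  -H "Authorization: Splunk <TOKEN>" \
  -d '{"event": "hello world"}'
```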
While upgrading to the latest version, I get the error below when I start the service:

Process: 25918 ExecStart=/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt (code=exited, status=2)
splunkd.service: control process exited, code=exited status=2

This is what the splunkd daemon error says:

An error occurred: Failed to run splunkd rest: stdout: stderr:splunkd: /opt/splunk/src/util/HttpClientRequest.cpp:1760: void HttpClientTransaction::_handleProxyConnect(): Assertion `_poolp->hasSslContext()' failed.
Dying on signal #6 (si_code=-6), sent by PID 26389 (UID 1002). Attempting to clean up pidfile

Has anyone else seen this error before?
Hi, I run the exact same SPL; it runs and returns results as an admin user, but returns nothing as a normal user!

Workarounds tried:
1. Set permissions for the data model and search (read permission for all).
2. Set the user's indexes (Settings > Access controls > Roles > user > Indexes).

FYI:
1. The data is indexed in a separate index.
2. Without the complete SPL, the user can access results with a search like: index="myindex" | search error*
3. Inspecting the job shows: INFO UserManager - Unwound user context: myuser -> NULL

Any recommendations? Thanks.
There are three conditions in my eval:
1) date=2019-Present → '"*/2019","*/2020"'
2) date=2019 → "*/2019"
3) date=2020 → "*/2020"

None of the condition values pass through to OpenedOn IN(dtok) as expected. In the example below, OpenedOn IN(dtok) should result in OpenedOn IN("*/2019","*/2020").

..base search
| eval date="2019-Present"
| eval dtok=case(date="2019-Present", "\"*/2019\",\"*/2020\"", date="2019", "*/2019", date="2020", "*/2020")
| search OpenedOn IN(dtok)
| bin span=1mon OpenedOn
| chart count(sys_id) as count over OpenedOn_2 by "Business Service" limit=0
| addtotals

Thank you.
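The underlying issue is that search OpenedOn IN(dtok) treats dtok as a literal string to match, not as a token to expand: IN() values are fixed at parse time and cannot come from a field computed by eval. One way around this, sketched below, is to build a regular expression in the eval instead and compare per-event with where match() (the base search is elided as in the question):

```
..base search
| eval date="2019-Present"
| eval dtok=case(date="2019-Present", "/2019|/2020",
                 date="2019", "/2019",
                 date="2020", "/2020")
| where match(OpenedOn, dtok)
```

Since eval function arguments are expressions, match() can take the pattern from a field, which is what makes this dynamic where IN() is not.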
Hi, I'm a new Splunk user. I have a requirement to integrate Micro Focus BSM 9.24 logs into Splunk. Can someone help me get started with this? Should I use the BSM collector to collect all the logs on the BSM side? Also, by which means can I send the data to Splunk:

1. Universal forwarder
2. Heavy forwarder
3. DB Connect
4. WinSCP for Windows
5. HTTP Event Collector

I'll be very thankful if someone could help me with how to proceed.
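If the BSM logs are plain files on the BSM server, the simplest of the listed options is a universal forwarder monitoring the log directory; a sketch of the inputs.conf (the log path, index, and sourcetype here are assumptions to adjust for the actual BSM install):

```
# inputs.conf on a universal forwarder installed on the BSM server
[monitor:///opt/HP/BSM/log]
index = bsm
sourcetype = bsm:log
disabled = 0
```

DB Connect would only apply if the data to pull lives in BSM's database rather than in log files, and HEC would apply if something on the BSM side can POST events over HTTP.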
Hi, I would like to test different pattern matches (in SPL format) dynamically against a field value, without using the map command. Example:

| makeresults
| eval _raw = "foo var"
| eval mymatch = "var OR fo*"
| eval test = if(searchmatch($mymatch$), "yes", "no")

I tried with a macro but it doesn't work.
The search (thanks to whoever provided this) is:

| tstats count where host=linux01 sourcetype="linux:audit" by _time span=1d prestats=t
| timechart span=1d count as total
| appendcols
    [ search host=linux01 sourcetype="linux:audit" key="linux01_change" NOT comm IN (vi, rm, ls)
      | timechart span=1d count as filter ]

If there are no matched events for either "total" or "filter", I get "No results found". If there are no matched events for just one of "total" or "filter", that series is simply missing from the timechart; I would instead like a 0 displayed. Any idea will be much appreciated.
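For the case where one series exists but the other has gaps, appending fillnull to the search above should replace the missing values with zeros; a sketch:

```
| tstats count where host=linux01 sourcetype="linux:audit" by _time span=1d prestats=t
| timechart span=1d count as total
| appendcols
    [ search host=linux01 sourcetype="linux:audit" key="linux01_change" NOT comm IN (vi, rm, ls)
      | timechart span=1d count as filter ]
| fillnull value=0 total filter
```

This cannot help with the "No results found" case, though: when neither search returns events there are no time buckets at all for fillnull to operate on.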
I am trying to create an alert that checks how many messages are stuck in the queue and what the age of those messages is. The problem is that the field carrying the number of messages and the field carrying the age of messages have the same name, i.e. metric_dimensions. Can someone advise how to join these two fields with the same name but different values?
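Assuming each event carries metric_dimensions plus some field that distinguishes which metric the value belongs to (the field and metric names below are guesses, as is the grouping key), the two values can be pivoted onto one row without a join by using eval inside stats; a sketch:

```
index=queues sourcetype=mq_metrics
| stats latest(eval(if(metric_name="queue_depth",   metric_dimensions, null()))) as depth,
        latest(eval(if(metric_name="oldest_msg_age", metric_dimensions, null()))) as age_seconds
        by queue
| where depth > 100 OR age_seconds > 3600
```

The where clause thresholds are placeholders for the alert condition; once both values sit on the same row per queue, the alert can test either or both.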