All Topics



We have a not-at-all overloaded ES search head with a separate dispatch volume that has plenty of room, yet we still get 500MB warnings. We also have a few weekly-scheduled searches that return roughly 100 rows of results with a few dozen fields. "dispatch.ttl" is at its default of "2p", but the results are always gone after 2 days. We are on 7.3.latest. We have tried setting it to 2 weeks' worth of seconds and that did not work. What could be causing this? What logs should I look at, and what should I look for?
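One thing worth double-checking is where the TTL is actually being set. It can be overridden per saved search in savedsearches.conf, and a triggered alert action's own TTL (from alert_actions.conf) can take precedence over the search's setting. A sketch, with a hypothetical stanza name:

```
# savedsearches.conf on the search head (stanza name is hypothetical)
[My Weekly Report]
# 2 weeks expressed in seconds; the default of "2p" means
# twice the scheduled period
dispatch.ttl = 1209600
```

If the searches fire alert actions, check whether the action's ttl in alert_actions.conf is shortening the artifact lifetime instead.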
I want to calculate the row values of every column by error message. I tried:

| stats count(host) values(host) values(functionality) count(functionality) values(loan_num) by error_message

but I'm just getting the host count as 90. If I run a query separately, like:

| stats count(hostcount) by hostvalues

it shows all the values in their respective columns. For example, if the hosts are hosta=20, hostb=30, hostc=40, I want to fetch the individual details in those same columns, split by error_message.
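If the goal is one row per error_message with a per-host breakdown, `count(host)` alone won't split the count by host value. One common approach is to count by both fields first and then roll the detail back up; a sketch, assuming the field names from the question:

```
| stats count by error_message, host
| eval host_detail = host . "=" . count
| stats values(host_detail) as hosts_with_counts by error_message
```

This produces, per error_message, a multivalue field like hosta=20, hostb=30, hostc=40. The same pattern can be repeated for functionality or loan_num.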
I have a playbook that writes data to an index 'a', and I am polling events which are closed, i.e. `notable | search status="x"`, together with the data for those events from index 'a'; that is, I am using a nested query to get the data. When I close one of the latest events, that event gets polled immediately, but if I then close an event older than that one, it does not get polled. Has anyone faced this issue?
Hi guys, I am new to Splunk. I have multiple events that look like this:

2020-02-07 07:21:20 action_time="2020-01-02 07:21:20.39", id_client="1234", ticket="1"
2020-02-07 07:21:20 action_time="2020-01-02 07:22:20.39", id_client="4567", ticket="2"
2020-02-07 07:21:20 action_time="2020-01-02 07:23:20.39", id_client="1234", ticket="2"
...

I would like to build transactions like this: in all events, find the first event where id_client="1234" and ticket="1"; on a match, find the next event with the same id_client but ticket="2". So, for the same client: first ticket=1, followed by ticket=2 (no other actions in between). I tried:

| transaction ticket startwith='1' endwith='2'

but it does not work. How can we do this in Splunk? Thank you in advance.
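For what it's worth, the transaction options are spelled startswith/endswith, and the transaction should be keyed on the client rather than the ticket so consecutive events for the same id_client are grouped. A sketch, assuming the fields extract as id_client and ticket:

```
... | transaction id_client startswith=eval(ticket=="1") endswith=eval(ticket=="2") maxevents=2
```

maxevents=2 keeps only pairs with no other actions of that client in between; drop it if intervening events are acceptable.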
Hi, I see a few similar questions in the past, however I can't see an answer to date. Does anyone know whether the splunkd process can utilise modern multicore CPU architectures, or is it bound to a single thread of execution? The Splunk ingest pipeline looks to me (as a non-programmer) like something that might lend itself to multi-threading. Thanks, David
Hello there! I want to add a percentage row to a chart table. Search string:

index=smsc tag=MPRO_PRODUCTION DATA="8000000400000000" OR "8000000400000058" | dedup DATA | chart count by SHORT_ID, command_status_code | search NOT ESME_RTHROTTLED=0 | sort - ESME_RTHROTTLED | head 15

And the chart table (screenshot not included): the red-marked result is what I need to add. Its value should be calculated like the blue-marked one: the ESME_RTHROTTLED value divided by ESME_RTHROTTLED and ESME_ROK together. Can someone help me?
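One way to compute that ratio is an eval after the chart, sketched under the assumption that the chart produces columns literally named ESME_RTHROTTLED and ESME_ROK:

```
index=smsc tag=MPRO_PRODUCTION DATA="8000000400000000" OR DATA="8000000400000058"
| dedup DATA
| chart count by SHORT_ID, command_status_code
| eval pct_throttled = round(ESME_RTHROTTLED / (ESME_RTHROTTLED + ESME_ROK) * 100, 2)
| search NOT ESME_RTHROTTLED=0
| sort - ESME_RTHROTTLED
| head 15
```

If either column can be null for some rows, wrap each in coalesce(..., 0) first so the division doesn't drop those rows.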
Hi, Does anyone happen to know if multisite search head clustering is supported in ES 6.x? The Splunk Validated Architectures document says not, but it was written in 2018. Reading the release notes of 6.0, 6.0.1 and 6.1, it sounds like there have been adjustments to the way it handles SHC knowledge objects, but I don't know if this changes the advice from the validated design docs:

"A single dedicated search head cluster contained within a site is required to deploy the app. ES requires a consistent set of runtime artifacts to be available and this cannot be guaranteed in a stretched SHC when a site outage occurs. To be able to recover an ES SH environment from a site failure, 3rd party technology can be used to perform a failover of the search head instances, or a 'warm standby' ES SH can be provisioned and kept in sync with the primary ES environment."

Regards, David
Hello, I found a blog post about Microsoft retiring basic authentication for Exchange Online on October 13, 2020: https://developer.microsoft.com/en-us/office/blogs/end-of-support-for-basic-authentication-access-to-exchange-online-apis-for-office-365-customers/ If this app uses basic authentication, its requests will fail after October 13, 2020, and I believe it does use basic authentication. Is there any way this app can use an authentication method other than basic authentication? Splunk version: 7.3.1. App version: 1.1.0. Thanks!
I have a search query like this:

index=ppt sm.to{}="12-12-518@dt.com" OR sm.to{}="050920@cp.com" | table sm.to{} sm.stat

and I want to use a CSV lookup instead, because I have more email addresses to match, and I want the result to show these two fields. My CSV contains:

sm.to{}
050920@cp.com
12-12-518@dt.com
774211@PP.com
859@dat.com
20909@PP.com
07548@pp.com

Can anyone help with a lookup search query for me? Thanks.
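A common pattern is to let an inputlookup subsearch generate the OR list for you. A sketch, assuming the file is uploaded as a lookup named emails.csv (hypothetical name) with the header column sm.to{} exactly as shown:

```
index=ppt [ | inputlookup emails.csv | fields "sm.to{}" ]
| table sm.to{} sm.stat
```

The subsearch expands into ( sm.to{}="050920@cp.com" OR sm.to{}="12-12-518@dt.com" OR ... ), so adding addresses only requires editing the CSV.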
Hi all. I am struggling with where to check. I want to create Splunk users automatically, so I wrote this script, test.py:

import sys
import os
import requests   # was "import request" -- the library is called "requests"
import json

def test():
    data = {'name': 'username', 'password': 'password', 'roles': 'user'}
    response = requests.post('https://mng_uri:8089/services/authentication/users',
                             data=data,
                             auth=('admin', 'passme'))

if __name__ == "__main__":   # was "id __name__ ==" -- a typo for "if"
    test()

I can execute this script with python test.py in my /home directory, and it creates the user. So I made a custom alert action, created an alert, and selected this custom action, but the user is not created. I have no idea why, because there are no errors in the internal log (splunkd.log). Where should I check?
Hi guys, I am creating a rule for a switch across multiple nodes: if the status of the switch goes down and doesn't come up within an hour, an alert has to be triggered. But as you can see in the logs, the status sometimes comes back up within a fraction of a second, so I want to apply a threshold of 1 hour. Kindly help me form the Splunk query.

2019-12-02T17:25:38.448Z x.x.x.x <45>12376292: 12377249: *Dec 2 18:14:15.138: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up
2019-12-02T17:25:38.448Z x.x.x.x <45>12376291: 12377248: *Dec 2 18:14:15.101: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to down

Thanks in advance
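One way to alert only on outages lasting over an hour is to pair each down event with the next up event per interface and check the duration. A sketch, assuming the index name and the rex-extracted fields are placeholders for your environment:

```
index=network "%LINEPROTO-5-UPDOWN"
| rex "Interface (?<interface>\S+), changed state to (?<state>up|down)"
| transaction host interface startswith=eval(state=="down") endswith=eval(state=="up")
| where duration > 3600
```

Note that an interface which is still down never closes its transaction; adding keepevicted=true (and checking the closed_txn field) is one way to catch those open outages as well.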
My Cisco indexer just stopped indexing new data. Splunk is receiving data from the syslog server, but it is just not getting indexed, so nothing is showing in the Cisco Networks app/add-on. I do have inputs/outputs configuration on my syslog servers, via a UF that is monitoring the folder with the logs; that is not the problem, since I can see current and old logs in the SH. The output points to my HF, which forwards the data to the indexer. I'm running 8.0.1 with one server each for SH, IDX, DP, and HF. I know it's not indexing because my indexer hasn't received data for at least a day, and there are no errors in the logs.
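A couple of internal searches that often help narrow down where the pipeline stops (these use Splunk's own _internal index, so they work even when the Cisco index is empty):

```
index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=5m sum(kb) by series

index=_internal log_level=ERROR OR log_level=WARN
    component=TcpOutputProc OR component=TcpInputProc
```

The first shows per-index throughput over time (a series that flatlines points at the stalled input); the second surfaces forwarding problems between the UF, HF, and indexer, such as blocked queues or connection failures.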
Our servers are located in private subnets on EC2 instances in AWS. The platform/software we are using is called Alfresco, which produces log files named 'alfresco.log' that I want ingested into Splunk Cloud. One is a Linux instance in a private subnet and the other is a Windows machine in a private subnet. They cannot connect to the Internet from within; they can only communicate with the machines in their VPC. Can you kindly let us know how to send those logs to Splunk Cloud?
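One common layout is to run universal forwarders on both instances and route them through an intermediate heavy forwarder (or another UF) that does have a path out, e.g. via a NAT gateway or proxy in a public subnet; only that one host then needs to reach Splunk Cloud. A sketch of the UF side, with a hypothetical forwarder hostname:

```
# outputs.conf on each UF in the private subnet
# (hostname and port are hypothetical -- use your intermediate forwarder)
[tcpout]
defaultGroup = to_intermediate

[tcpout:to_intermediate]
server = hf.internal.example.com:9997
```

The intermediate forwarder would then install the Splunk Cloud universal forwarder credentials app so its own outputs point at your Splunk Cloud stack.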
I've combed through inputs.conf and the various questions on Answers but can't seem to find a definitive example of how to employ a whitelist, or how to modify my monitor stanza to match specific folders and their sub-directories for my use case. Example:

match on /mnt/data/apple/desired_folder/*/*
match on /mnt/data/apple/dir_1/*/*
match on /mnt/data/apple/folder_two/*/*
DON'T match /mnt/data/apple/junk/*/*
DON'T match too many others to list

Each directory in the whitelist has one more sub-directory, then the log files themselves, of which I want everything in the folder. Do I have to write 3 monitor stanzas for this? Failed attempts (no logs get pulled in):

[monitor:///mnt/data/apple/(dir_1|folder_two|index_this)/*/*]

and

[monitor:///mnt/data/apple/*/*/*]
whitelist = (dir_1|folder_two|index_this)

For now I've resorted to 3 monitor stanzas, but I thought there was a cleaner way to do this in Splunk that I've completely forgotten/missed.
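For what it's worth, whitelist is a regex matched against the full path of each file, not a glob against a directory component, so one stanza along these lines may cover all three folders (folder names taken from the "match on" list above):

```
# inputs.conf -- a single stanza; whitelist is a regex on the full file path
[monitor:///mnt/data/apple]
whitelist = ^/mnt/data/apple/(desired_folder|dir_1|folder_two)/
```

Anchoring with ^ and the trailing / keeps it from matching, say, /mnt/data/apple/dir_1_old. Deeper nesting under those folders is still picked up, since the regex only constrains the path prefix.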
Hi, We have been experiencing a broken UI on 3 of our nodes (DS, SH deployer, and IDX cluster master; 2 screenshots below), while the rest seem fine. The Web UI is not showing web objects as normal (dropdowns, apps, etc.). This is not an issue with serverclass.conf (as the message in the UI suggests), because running the same functionality in the background/CLI works fine. Attempts to fix this include `splunk restart splunkweb` and `splunk restart`; the former fixes the issue, but the problem comes back after 15-30 minutes of the UI not being used. We've cleared the browser cache, also to no avail. The version we're using is 6.5.2. Have you experienced the same? If so, how do we permanently fix this? Thanks in advance.
The SmartStore documentation says the following:

"The amount of local storage available on each indexer for cached data must be in proportion to the expected working set. For best results, provision enough local storage to accommodate the equivalent of 30 days' worth of indexed data."

Is this the same as hot bucket data, or is it on top of the hot data? E.g., assuming the following factors:

Intake = 100GB/day
Compression ratio = 0.50
Hot retention = 14 days

Using this formula found in another forum post:

Global cache sizing = Daily ingest rate x Compression ratio x (RF x Hot days + (Cached days - Hot days))
Cache sizing per indexer = Global cache sizing / No. of indexers
Cached days = Splunk recommends 30 days for Splunk Enterprise and 90 days for Enterprise Security
Hot days = Number of days before hot buckets roll over to warm buckets. Ideally this will be between 1 and 7, but configure it based on how hot buckets roll in your environment.

100 x 0.50 x (2 x 14 + (30 - 14)) = 2200?
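The arithmetic above does come out to 100 x 0.50 x (28 + 16) = 2,200 GB globally. If that figure is right for your environment, the per-indexer share is what feeds the cache manager setting. A sketch assuming 4 indexers sharing the cache evenly (the indexer count is an assumption, and note max_cache_size is expressed in MB):

```
# server.conf on each indexer
# 2,200 GB / 4 indexers = 550 GB = 563200 MB per indexer
[cachemanager]
max_cache_size = 563200
```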
I am new to Splunk. I have a DB connection from which I am fetching a table. I want to create a dashboard with the x-axis as time and the y-axis as the row count per hour. I tried the timechart function but I am unable to achieve my goal; I only get data without timechart. This is my query:

| dbxquery query="SELECT * FROM \"CASE\"" | timechart count by Id
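timechart needs a _time field, and dbxquery results don't get one automatically, which is why the timechart comes back empty. A sketch, assuming the table has a timestamp column called CREATED_DATE in "YYYY-MM-DD HH:MM:SS" format (the column name and format are assumptions -- substitute your own):

```
| dbxquery query="SELECT * FROM \"CASE\""
| eval _time = strptime(CREATED_DATE, "%Y-%m-%d %H:%M:%S")
| timechart span=1h count
```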
Is it possible to do RBAC without indexes? I have at least 5 indexes, but I can't use indexes for RBAC because all users need access to all 5 indexes; the requirement is that each user should only see their own data. If I ensure that the data is tagged at each user's location, would it be possible to use these tags so that only users who work at a specific location can see their data, and their data only, across the 5 indexes? I like index-based RBAC because it ensures users will not see any data even if they write their own searches, since they simply don't have access to the indexes they weren't assigned. Unfortunately that doesn't work here: the data is already indexed and we can't re-index it, so we have to rely on another attribute or tag to filter the data. Please let me know if you can suggest anything.
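One mechanism worth looking at is the per-role search filter, which appends a restriction to every search a role runs. A sketch, assuming the events carry a location field (the role name and location value are hypothetical):

```
# authorize.conf -- every search this role runs gets the filter appended
[role_site_london]
srchIndexesAllowed = idx1;idx2;idx3;idx4;idx5
srchFilter = location=london
```

One caveat: filters on search-time extracted fields can be expensive and, depending on how the field is derived, leaky; filtering on an indexed field or indexed term is the more robust choice.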
I cannot find any relevant documentation for installing netviz agents alongside app agents. Are there any steps for installing the netviz agents on app agents?
Hello. I have some questions about the configuration of Phantom.

Question 1. Most companies in Korea require separated networks (air-gapped). Every employee uses a separate PC for the Internet and an internal (work) PC, i.e. each user has two PCs. My office has the same constraint. One user operates the security control system for both the internal and external networks. Since communication between the two networks is not possible, Phantom must be operated separately on each. In this situation, do I need to purchase Phantom seat licenses for each network, or only one per user?

Question 2. Phantom's competitor, Demisto, introduced the concept of an engine (proxy) for this kind of environment. The engine is described as follows:

"Demisto engines are proxy servers installed on-premise that enable the unified functioning of diverse security environments without compromising any firewall or network restrictions. Users can download engines from the Demisto interface and choose which integrations to deploy through engines. All communication between engines and the Demisto server is conducted over HTTPS."

Does Phantom provide a secure way to connect to other networks using the same concept as Demisto's engine?

Question 3. I already know Phantom provides clustering. For Splunk Enterprise, the purpose of clustering and the role of each node are very clear. However, for Phantom it is hard to understand why the nodes exist, what role each node has, and why it should be clustered. I would like a detailed explanation of Phantom clustering.

Thanks,