All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I'm currently undergoing a sizing exercise to determine how large a Splunk license I need, and was wondering if anyone could help.

A quick background: I've got a trial license of Splunk Enterprise running on-prem as a single-instance deployment with the InfoSec app, and I am preparing to deploy Universal Forwarders to a select group of systems that will send security-related events and logs for Splunk to ingest and index. My organization is currently not interested in having Splunk ingest operations-type data, and wants to keep the scope of what Splunk ingests and indexes limited to security-related events only. I do have a specific list of sources, events, and event IDs I want to include in the inputs.conf file.

My question is: will my single instance filter out all events that are not in the inputs.conf whitelist, and then report how much data (in GB) was ultimately ingested based on that whitelist? Or would I need to spin up another server running Splunk as a Heavy Forwarder, have the UFs point to that, and reconfigure the original Splunk instance to be an indexer / search head?

It's important for me to get accurate data on how much Splunk ingests so that I can work with their sales team to get the most accurate pricing for the license size my organization actually needs. I'm familiar with Splunk's workload licensing model, but the initial costs I've been tasked with obtaining are for the ingestion model.

Please let me know if you need any additional information. Thanks in advance for any help you can provide! Jason
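For reference, a minimal sketch of measuring what actually gets indexed on the single instance, using the default license_usage.log (index, source, and field names here are the Splunk defaults, not anything specific to this environment). This only counts data that survives the inputs.conf whitelist, which is the figure an ingestion-based license is sized on:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) AS bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields - bytes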
Hello Splunkers, for a project I'm working on I need to store different IDs in a variable after evaluating them with if or case. The idea is to check several conditions and, if one or more are met, update the value of the variable.

Example:
event1: A | B | C
event2: A | C | E
event3: B | F | G

Conditions:
if A is present -> ID01
if B is present -> ID02
if C is present -> ID03

Result:
event1: ID01,ID02,ID03
event2: ID01,ID03
event3: ID02

I tried to concatenate the results, but with no success:

| makeresults
| eval letter1="A", letter2="B", letter3="C"
| append [| makeresults | eval letter1="A", letter2="C", letter3="E"]
| append [| makeresults | eval letter1="B", letter2="F", letter3="G"]
| eval ID=""
| eval ID=ID.if(letter1="A" OR letter2="A" OR letter3="A","ID01",NULL)
| eval ID=ID.if(letter1="B" OR letter2="B" OR letter3="B",",ID02",NULL)
| eval ID=ID.if(letter1="C" OR letter2="C" OR letter3="C",",ID03",NULL)
| table letter1 letter2 letter3 ID
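A sketch of one way around the problem (concatenating with NULL makes the whole result null): build the IDs with mvappend, which simply skips null() branches, and then join them:

| makeresults
| eval letter1="A", letter2="B", letter3="C"
| append [| makeresults | eval letter1="A", letter2="C", letter3="E"]
| append [| makeresults | eval letter1="B", letter2="F", letter3="G"]
| eval ID=mvappend(if(letter1="A" OR letter2="A" OR letter3="A","ID01",null()), if(letter1="B" OR letter2="B" OR letter3="B","ID02",null()), if(letter1="C" OR letter2="C" OR letter3="C","ID03",null()))
| eval ID=mvjoin(ID,",")
| table letter1 letter2 letter3 ID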
Hi, I have different log types like:

<SQL > <TID: 0000000050> <RPC ID: 0002424958> <Queue: List > <Client-RPC: 390620 > <USER: *** > <Overlay-Group: 1 > /* Fri Feb 04 2022 17:47:10.0461 */SELECT * FROM ( SELECT T226.C1,C600000451 FROM T226 WHERE (('CC0000132482648' = T226.C600000451) AND ('7459898' = T226.C600000001)) ORDER BY 1 ASC ) WHERE ROWNUM <= 2

Or:

<SQL > <TID: 0000000056> <RPC ID: 0002424078> <Queue: Fast > <Client-RPC: 390620 > <USER: *** > <Overlay-Group: 1 > /* Fri Feb 04 2022 17:46:53.9515 */SELECT C999003082 FROM T226 WHERE C1 = 'CC0000272965790'

I need to extract the CC* value, for example in this case CC0000132482648 (first log) and CC0000272965790 (second log). Thanks in advance!
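A sketch of a rex extraction, assuming the value of interest is always "CC" followed by digits inside single quotes (the field name cc_id is made up for the example; add max_match=0 if an event can contain several such values):

| rex field=_raw "'(?<cc_id>CC\d+)'"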
A Splunk forwarder is running on the host and sending the audit logs to Splunk instances through HEC. Now I want to send the debug logs to another instance through another HEC endpoint. Is it possible to configure two HEC endpoints in the Splunk forwarder?
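For comparison, a sketch of the classic per-input routing mechanism on a forwarder, which uses two tcpout groups plus _TCP_ROUTING rather than HEC (group names, hosts, and monitored paths below are placeholders, not the actual environment):

outputs.conf:
[tcpout:audit_group]
server = splunk-audit.example.com:9997

[tcpout:debug_group]
server = splunk-debug.example.com:9997

inputs.conf:
[monitor:///var/log/audit/audit.log]
_TCP_ROUTING = audit_group

[monitor:///var/log/app/debug.log]
_TCP_ROUTING = debug_group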
Hello, I'm currently installing and configuring a new monitoring console. I have another instance which is the license master. I'm doing the configuration with conf files.

On my new MC, I've added this in /opt/splunk/apps/my_app/local/server.conf:

[license]
master_uri = https://xx.XX.XX.XX:8089

The network flows are open and the master_uri is reachable. On the new instance, `btool server list license --debug` shows only this file applied: /opt/splunk/etc/system/default/server.conf

I don't understand what I'm doing wrong. Any pointers? Thanks a lot! Ema
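For reference, a sketch of where btool usually picks such a stanza up from (note the "etc" in the path; the app name my_app is just an example), plus the verification command:

/opt/splunk/etc/apps/my_app/local/server.conf:
[license]
master_uri = https://xx.XX.XX.XX:8089

/opt/splunk/bin/splunk btool server list license --debug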
Hello, I am looking for some guidance, please, regarding a CSV input I have that is automatically updated daily as part of the TA. I want to be able to extract rows that have been updated within the last 24 hours.

The CSV has, for example, the following columns: title, category, published_datetime.

I want to see the other values in a row when its published_datetime (which is in UTC format) is less than 24 hours old. Thank you
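A minimal sketch, assuming the CSV is available as a lookup named my_file.csv and published_datetime is an ISO-style UTC timestamp (both the lookup name and the strptime format are assumptions; adjust them to match the actual data):

| inputlookup my_file.csv
| eval published_epoch=strptime(published_datetime, "%Y-%m-%dT%H:%M:%S")
| where published_epoch >= relative_time(now(), "-24h")
| table title category published_datetime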
My events are in json format. The  json path where my data is , is here   "alert.smtp-message.smtp-header" And with in "smtp-header", I have content like this,  from which I could use help in extracting some fields using rex. ============   "smtp-header": "Received: from mxdinx66.Gramyabnk.com (mxdinx66.Gramyabnk.com [159.45.78.215])\n\tby mn-svdc-epi-ran11.ist.Gramyabnk.net (Postfix) with ESMTP id 4JyJsN6m8kzVKnNg\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:28 +9999 (UTC)\nReceived: from pps.filterd (mxdinx66.Gramyabnk.com [127.9.9.1])\n\tby mxdinx66.Gramyabnk.com (8.16.9.42/8.16.9.42) with SMTP id 21EMIuas425197\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:28 GMT\nReceived: from mx9a-99994996.pphosted.com (mx9a-99994996.pphosted.com [295.229.165.191])\n\tby mxdinx66.Gramyabnk.com with ESMTP id 6e65wvawac-1\n\t(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA684 bits=256 verify=NOT)\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:27 +9999\nReceived: from pps.filterd (m9216616.ppops.net [127.9.9.1])\n\tby mx9b-99994996.pphosted.com (8.16.1.2/8.16.1.2) with ESMTP id 21EIDxq8928666\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:26 GMT\nAuthentication-Results: ppops.net;\n\tspf=pass smtp.mailfrom=info@efk.admin.ch;\n\tdkim=pass header.d=efk.admin.ch header.s=dkimkey1;\n\tdmarc=pass header.from=efk.admin.ch\nReceived: from mail11.admin.ch (mail11.admin.ch [162.26.62.11])\n\tby mx9b-99994996.pphosted.com (PPS) with ESMTPS id 6e625qnsf9-1\n\t(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA684 bits=256 verify=NOT)\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:26 +9999\nDKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=efk.admin.ch; h=to\n\t:subject:date:to:from:reply-to:subject:message-id:mime-version\n\t:content-type:content-transfer-encoding; s=dkimkey1; bh=uoC6bt5q\n\thKVezRrk1ux9j7rGCMvkx/6cA9/rS1xbvwE=; b=V9mOEgc1tAyvbFpvkKFgHbnD\n\tHDh67iweoPEV7ZYCPpLW8KSBRU+uX+uL64xdJu9E1mp+BvITob98PRfIaCSIi6HC\n\tIf74+dtpxcVyfo9JXZmCj49tJdilXquYWoCu+OhLeONYd9/NMVs4S/IFHnYT/hmN\n\tNBzuP/5C6MKdlHavIwo=\nTo: \"Pretty Eloisa send you naughty videos https://vk.cc/cb5mIY\" <tran.cu@Gramyabnk.com>\nSubject: =?utf-8?Q?Pretty_Eloisa_send_you_naughty_videos_https://vk.cc/cb5mIY,_bitte?= =?utf-8?Q?_best=C6=A4tigen_Sie_ihre_EFK-Newsletter-Anmeldung?=\nDate: Mon, 14 Feb 2922 22:61:28 +9999\nTo: \"Pretty Eloisa send you naughty videos https://vk.cc/cb5mIY\" <tran.cu@Gramyabnk.com>\nFrom: \"Eidg. Finanzkontrolle\" <info@efk.admin.ch>\nReply-To: \"Eidg. 
Finanzkontrolle\" <info@efk.admin.ch>\nSubject: =?utf-8?Q?Pretty_Eloisa_send_you_naughty_videos_https://vk.cc/cb5mIY,_bitte?=\n =?utf-8?Q?_best=C6=A4tigen_Sie_ihre_EFK-Newsletter-Anmeldung?=\nMessage-ID: <MjQ1NzA5MwAC75229Y8BAMTY9NDg6Nzg4ODM6NzM@www.efk.admin.ch>\nContent-Type: multipart/alternative;\n\tboundary=\"b1_292f6ee91b9de8a92268de4c4ce5b57f\"\nX-TM-AS-GCONF: 99\nX-MSH-Id: E7195F2B6F624BA184EA6D9F12CD98AE\nContent-Transfer-Encoding: 7bit\nX-Proofpoint-GUID: 5sQWXU-CRjHoWtaxmd54Yn68A2IDf2Eu\nX-CLX-Shades: MLX\nX-Proofpoint-ORIG-GUID: 5sQWXU-CRjHoWtaxmd54Yn68A2IDf2Eu\nX-CLX-Response: 1TFkXGxgaEQpMehcaEQpZRBd6GF1SX9ZiBWNEcxEKWFgXbGdhYnBoGkBpaxo 7GxAHGRoRCnBsF6oeXwEBQkZDfXBTEAc ZGhEKcEwXZ1MfZ6t5RRkTE9AQGhEKbX4XGhEKWE9XSxEg\nMIME-Version: 1.9\nX-Brightmail-Tracker: True\nx-env-sender: info@efk.admin.ch\nX-Proofpoint-Virus-Version: vendor=nai engine=6699 definitions=19258 signatures=676461\nX-Proofpoint-Spam-Details: rule=inbound_aggressive_notspam policy=inbound_aggressive score=9\n clxscore=129 suspectscore=9 adultscore=9 bulkscore=9 mlxlogscore=472\n malwarescore=9 phishscore=9 spamscore=9 priorityscore=9 lowpriorityscore=9\n impostorscore=9 mlxscore=9 classifier=spam adjust=9 reason=mlx scancount=1\n engine=8.12.9-2291119999 definitions=main-2292149128",     ============================================== I just need the extraction of the fields present in the last 3 lines in bold. The values after the = sign , excluding the \n . clxscore suspectscore adultscore bulkscore mlgxscore malwarescore phishscore spamscore priorityscore owpriorityscore  impostorscore mlxscore classifier
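A sketch of one way to pull those values out with rex against _raw (the same patterns would also work against the already-extracted "alert.smtp-message.smtp-header" field; the extracted field names simply mirror the keys in the X-Proofpoint-Spam-Details header, and the remaining scores follow the identical pattern):

| rex field=_raw "clxscore=(?<clxscore>\d+)"
| rex field=_raw "suspectscore=(?<suspectscore>\d+)"
| rex field=_raw "adultscore=(?<adultscore>\d+)"
| rex field=_raw "bulkscore=(?<bulkscore>\d+)"
| rex field=_raw "mlxlogscore=(?<mlxlogscore>\d+)"
| rex field=_raw "classifier=(?<classifier>\w+)"

(repeat the same rex pattern for malwarescore, phishscore, spamscore, priorityscore, lowpriorityscore, impostorscore, and mlxscore)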
Hi, I am trying to onboard streaming events from Salesforce into my Splunk instance using the 'Splunk Add-on for Salesforce Streaming API'. I have an HTTP proxy at the instance level to allow connecting to the internet-facing Salesforce Sandbox instance. After setting up the required connection and inputs, the data is not getting onboarded, and I am getting the following ERROR messages in ta_sfdc_streaming_api_sfdc_streaming_api_events.log. My Splunk version is 8.1.5. How do I solve this?

##################

ERROR pid=434886 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/current/etc/apps/TA-sfdc-streaming-api/bin/sfdc_streaming_api_events.py", line 66, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/input_module_sfdc_streaming_api_events.py", line 26, in collect_events
    loop.run_until_complete(task)
  File "/opt/splunk/current/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/input_module_sfdc_streaming_api_events.py", line 61, in connect_sfdc
    async with sf_streaming_client as client:
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/exceptions.py", line 143, in async_wrapper
    return await func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/client.py", line 246, in __aenter__
    return cast("Client", await super().__aenter__())
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiocometd/client.py", line 432, in __aenter__
    await self.open()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/exceptions.py", line 143, in async_wrapper
    return await func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/client.py", line 143, in open
    await authenticator.authenticate()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/auth.py", line 100, in authenticate
    status_code, response_data = await self._authenticate()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/auth.py", line 187, in _authenticate
    response = await session.post(self._token_url, data=data)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiohttp/client.py", line 619, in _request
    break
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiohttp/helpers.py", line 656, in __exit__
    raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
Hi Everyone, the goal here is to auto-increment / decrement a value based on the position of a character within a string.

For example, here I am trying to pull and assign a value based on the position of "R". This works, but only when "pos" is less than 3; I would like to assign a value for each and every position.

Field1 = "RFTGQOASZ"

| makeresults
| eval field1 = "RFTGQOASZ"
| eval pos = len(mvindex(split(field1,"R"),0))+1
| eval value = 5
| eval pos1 = if(pos<3,value,0)

Likewise, the field1 value will change every time, and I would like to assign a value based on the position. So, for example, if the "R" character is in the middle, auto-decrement the value, something like i--.
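A sketch of deriving the value directly from the position instead of branching on it, assuming the value should start at 5 and drop by one for each position the "R" sits further to the right (the base value of 5 and the decrement direction are assumptions):

| makeresults
| eval field1="RFTGQOASZ"
| eval pos=len(mvindex(split(field1,"R"),0))+1
| eval value=5-(pos-1)
| table field1 pos value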
Hello, everyone! I want to configure getting data in JSON format through Splunk DB Connect. The database is MySQL. Is it possible?
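One possible angle, sketched under the assumption of MySQL 5.7+ and a made-up table my_table: have the DB Connect input's SQL build the JSON itself with JSON_OBJECT, so each row arrives as a JSON string that Splunk can parse:

SELECT JSON_OBJECT('id', id, 'name', name, 'created_at', created_at) AS event
FROM my_table;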
Hello, I'm experiencing some issues on my search heads. I'm getting this error on them:

The searchhead is unable to update the peer information. Error = 'Unable to reach the cluster manager' for manager=https://hostname-cm1:8089.

Could anyone recommend a fix or workaround?
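A couple of quick checks that might help narrow it down (a sketch; host name and credentials are placeholders). The first verifies the search head can actually reach the manager's management port, the second shows which clustering stanza the search head is really using:

curl -k https://hostname-cm1:8089/services/server/info -u admin
/opt/splunk/bin/splunk btool server list clustering --debug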
Hi all, we are using Splunk Cloud, and I am using https://http-inputs-mydomain.com/services/collector/raw to send a log file for ingestion. The problem is that each line in this log file can be quite big, 25,000 characters or more, and Splunk Cloud is truncating at 10,000 characters. I can find steps for handling this on Splunk on-prem for heavy forwarders etc., but it doesn't seem to be addressed for the HTTP inputs on Cloud. Any ideas on how I can change it to accept larger logs? Thanks, Chris
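For what it's worth, the 10,000-character cutoff matches the props.conf TRUNCATE default, so a sketch of the relevant setting would look like the stanza below (the sourcetype name is a placeholder; in Splunk Cloud this would typically be delivered through an app you upload or a change requested via support rather than edited directly):

props.conf:
[my_sourcetype]
TRUNCATE = 50000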
Hi, I am new to Splunk and struggling to create line graphs. I have a query which displays a count for each month:

index="app" earliest=1640995200 latest=1643673600
| stats count AS January
| appendcols [search index="app" earliest=1643673600 latest=1646092800 | stats count AS February]

This is correctly presenting the information as I would expect it. However, I am almost certain my search is not right, as when I attempt to plot this on a line graph the axes are not correct. I would like the total count on the Y axis and the months along the X axis. Would appreciate any guidance. Thank you!
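A sketch of the more usual approach for this kind of chart: let timechart create the month buckets instead of appendcols, with a single time range that covers both months. The months then land on the X axis and the count on the Y axis automatically:

index="app" earliest=1640995200 latest=1646092800
| timechart span=1mon count AS Total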
Ehh, I have an annoying case. I'm monitoring a file over a Windows share (and to make things even harder to troubleshoot, I don't have direct access to the share from my administrative user; only the domain user the UF runs as has access). The file is a CSV, it's getting properly split into fields, and the date is getting parsed OK. I have transforms for removing the header (and a footer - this file has a footer as well). And this works mostly well. Mostly, because every time data is added to the file, the file is apparently getting recreated from scratch - new data is inserted before the footer and I'm getting entries like:

02-15-2022 10:55:23.008 +0100 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='\\path\to\the\file'

Luckily, for now the file is relatively small (some 3k lines) and doesn't eat up much license compared to this customer's other sources, but it's annoying that the same events are getting ingested several times a day. The problem is that I don't see any reasonable way to avoid it. There is no deduplication functionality on input, and I don't have any "buffer" I could compare against using ingest-time eval or something like that. Any aces up your sleeves?
I am using the nginx app to ship nginx logs to Splunk. Everything works well, but intermittently I see a single event consisting of multiple nginx access log lines. The nginx app itself has EventBreaker enabled and an EventBreaker regex, but this doesn't work 10-20% of the time. Can someone please help, or am I missing something?

My inputs.conf:

[monitor:///var/log/nginx-access.log]
index = artifactory
disabled = false
source = nginx-access
sourcetype = nginx:plus:kv

[monitor:///var/log/nginx-error.log]
disabled = false
sourcetype = nginx:plus:error
index = artifactory
source = nginx-error

The nginx app has already created props.conf on the search head cluster.
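For reference, a sketch of explicit line breaking for this sourcetype, assuming one event per line (the props.conf stanza belongs on the indexers or heavy forwarders that parse the data, and the EVENT_BREAKER settings on the UF; whether these match what the nginx app already ships is an assumption to verify):

props.conf (indexers / heavy forwarders):
[nginx:plus:kv]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

props.conf (Universal Forwarder):
[nginx:plus:kv]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)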
Hi All, I have one dashboard with multiple panels and it is taking too much time to load. I am trying to implement a base search with post-process (sub) searches, and I have one doubt about implementing it. My queries share the same index, sourcetype, and source, but differ in one code - assume it's a transaction code (tcode) - and the rest of each query is the same.

For example:
index=xyz sourcetype="dtc:hsj" tcode="1324" -----> for a few queries
index=xyz sourcetype="dtc:hsj" tcode="1324" OR tcode="234" ------> for a few queries

The remaining part is the same for all the queries. Is there a way to configure tcode in the base search and use it in the sub searches?

Thanks in advance
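A sketch of the base/post-process pattern in Simple XML, assuming the base search is made broad enough to cover both tcodes and each panel filters afterwards (the id, field names, and stats clause below are placeholders for illustration):

<search id="base">
  <query>index=xyz sourcetype="dtc:hsj" (tcode="1324" OR tcode="234") | fields tcode field_a field_b</query>
</search>

<panel>
  <chart>
    <search base="base">
      <query>search tcode="1324" | stats count by field_a</query>
    </search>
  </chart>
</panel>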
Is there a way to execute the following OS processes?

- Cluster master server (Splunk Enterprise installed)
/usr/bin/eu-stack
/usr/bin/iostat
/usr/bin/netstat
/usr/bin/ps
/usr/bin/strace
/usr/sbin/lsof
/usr/sbin/tcpdump

- Search head server (Splunk Enterprise and Splunk Enterprise Security installed)
/usr/bin/eu-stack
/usr/bin/iostat
/usr/bin/netstat
/usr/bin/ps
/usr/bin/strace
/usr/bin/uname
/usr/sbin/lsof
/usr/sbin/tcpdump

- Deployment server (Splunk Enterprise installed)
/usr/bin/eu-stack
/usr/bin/iostat
/usr/bin/netstat
/usr/bin/ps
/usr/bin/strace
/usr/sbin/lsof
/usr/sbin/tcpdump
I am looking for help with one requirement; can anyone please assist? I want to append an inputlookup table to my main table, which has the same column names and field names.

Here are my main search results.

Here are my inputlookup results.

Desired output:
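A minimal sketch of appending a lookup to the main results, assuming the lookup is called my_lookup.csv and its column names already match the main table (both names and the field list are placeholders):

index=... your main search ...
| table field1 field2 field3
| append [| inputlookup my_lookup.csv | table field1 field2 field3]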
Hi, I would like to know if it is possible to automatically display a radar chart from a lookup. radar.csv is the result of a scheduled search. There are 3 fields in this CSV: "sig_app", which corresponds to the radar "key" field; "sig_cat", which corresponds to the radar "axis" field; and "count", which corresponds to the radar "value" field. Is it possible to do this or not? Thanks

| inputlookup radar.csv
| eval sig_app=key
| eval sig_cat=axis
| eval count=value
| eval key="Actions", AAA=.37, BBB=8.64, CCC=2.56, DDD=1.68, EEE=4.992
| untable key,"axis","value"
| eval keyColor="magenta"
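If the radar visualization just needs columns named key, axis, and value, a sketch of feeding it straight from the lookup would be the following (this assumes the lookup columns are exactly sig_app, sig_cat, and count, and that the viz accepts an optional keyColor column):

| inputlookup radar.csv
| rename sig_app AS key, sig_cat AS axis, count AS value
| eval keyColor="magenta"
| table key axis value keyColor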
Hi, I have a column chart that shows one field but filters by others. I want the columns to be colored differently for each value of a specific selection field in the input, but I cannot set a color for each value up front because I have something like 950 values (maybe more). So if the user chooses to look at 3 values in the filter, they should see each value in a different color and know which color belongs to which value. I have not found anything like this; I have only seen that the color has to be assigned to each value from the beginning. Can you help me?