All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, could someone help me with the issue below? In Splunk Cloud I have 500+ events, and each event contains 100+ lines of data. When exporting to a CSV file, a single event is split across multiple rows, which should not happen. I need the data row by row, exactly as in the Splunk results, without splitting. Is there a limit per single row when exporting to a CSV file? Here is a screenshot for reference: the 2nd and 3rd rows are a single event (but split into 2 rows), rows 5 and 6 are a single event, and rows 8 and 9 are a single event; the data in the 4th and 7th rows is fine.
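A common cause of this is literal newline characters inside the event text; some spreadsheet tools render each embedded newline as a new row even when the CSV field is quoted. A minimal sketch of a workaround (assuming the multi-line content is in _raw; the index and sourcetype below are placeholders) that flattens the newlines before exporting:
index=your_index sourcetype=your_sourcetype
| eval single_line=replace(_raw, "[\r\n]+", " ")
| table _time single_line
The replace() call collapses every run of carriage returns and line feeds into a single space, so each event exports as one row.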
I have a field, let's say the user field, that has some usernames without a domain and some with one. I want the field values that don't have a domain to have it added. Example: sparky1, sparky2@splunk.com. I want to be able to append splunk.com to the sparky1 value without adding it again to sparky2@splunk.com.
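A minimal sketch (assuming the field is named user and the domain is splunk.com):
| eval user=if(like(user, "%@%"), user, user . "@splunk.com")
Values that already contain an @ are left untouched; everything else gets @splunk.com appended.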
I have a timestamp in the format 20211005000000 and want to convert it to, for example, 2021/10/05, with the time in another field.
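A minimal sketch (assuming the original value is in a field called raw_time, which is a placeholder name):
| eval epoch=strptime(raw_time, "%Y%m%d%H%M%S")
| eval date=strftime(epoch, "%Y/%m/%d"), time=strftime(epoch, "%H:%M:%S")
strptime parses the 14-digit string into epoch time, and strftime then writes the date and the time out into separate fields.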
Hello, I followed the documentation to export health rules from one app as follows:
curl -k --user admin@@customer1:password https://controllerFQDN:8181/controller/healthrules/35 >> healthrules.xml
Then I tried importing the health rules into a different app using the following:
curl -k -X POST --user admin@customer1:password https://controllerFQDN:8181/controller/healthrules/52 -F file=@healthrule.xml
I get the following error: "Min triggers should be within 0 and 1." I am not sure what that means or whether I am doing anything wrong. I followed the documentation exactly as written. Thanks,
I also need to see who may have created the lookups. After finding the broken lookups list, I was planning to fix them. Thank you very much.
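A minimal sketch using the REST endpoint for lookup table files (assuming you have permission to query it); the eai:acl.owner field shows who owns each lookup file, which is usually the creator:
| rest /servicesNS/-/-/data/lookup-table-files
| table title eai:acl.app eai:acl.owner updated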
How do I pull data from a JIRA ID and use the value pulled from JIRA in a Splunk search query?
I have a search that I need to filter by a field, using another search. Normally, I would do this:
main_search where [subsearch | table field_filtered | format ]
It works like this pseudocode:
main_search
for result in subsearch:
    field_filtered=result
In my case, I need to use each result of the subsearch as a filter, but as "contains" rather than "equal to". I tried something like this, but it is not working:
main_search | where in (field_filtered,[subsearch])
How can I achieve this?
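A minimal sketch of one common workaround (assuming the main search has the same field_filtered field): wrap the subsearch values in wildcards before format, so the generated filter becomes a wildcard match, which behaves like "contains":
main_search
    [ subsearch
      | eval field_filtered="*" . field_filtered . "*"
      | fields field_filtered
      | format ]
The subsearch then returns terms like ( field_filtered="*value1*" OR field_filtered="*value2*" ), and the wildcards make each one a substring match instead of an exact equality.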
I have a single-instance Splunk setup with a handful of Universal Forwarders sending in data. There was previously a different architecture on this network, but this is a new build from the ground up; everything is new builds and fresh installs (all version 8.2.2.1; the server is RHEL 8; the clients are Windows 10). My UFs are installed with command-line options to set the forwarding server and deployment server (the same place). However, the clients' outputs.conf and deploymentclient.conf are periodically being overwritten, and I cannot for the life of me figure out why. The settings being pushed in are for the old architecture, none of which remains on the network. Also, notably, it seems to be only the Windows UFs that are getting their settings overwritten; my *nix boxes do not appear to be affected so far. I attached ProcMon to monitor the file edits. The changes are coming from splunkd.exe via the REST API:
C:\Program Files\SplunkUniversalForwarder\bin\splunkd rest --noauth POST /services/data/outputs/tcp/server/ name=wrong_server.domain.com:9997
C:\Program Files\SplunkUniversalForwarder\bin\splunkd rest --noauth POST /services/admin/deploymentclient/deployment-client/ targetUri=wrong_deployer.domain.com:8089
I haven't yet found a way to elicit this change manually, and the update interval seems to vary from just a few minutes to every couple of hours. I've scoured my Group Policy and have not found any relevant settings there. I'm stumped. Does anyone have any ideas as to what may be doing this?
Is checking Splunkbase.com and reading the app's description the only way? I have Splunk Enterprise "Core" and ES in my environment. Thanks for your help in advance.
My biggest problem here is probably phrasing the question. I have a search in a dashboard that buckets things into 30-day spans, displayed in a bar chart, e.g.:
30-60    --------------------------
60-90    ------------------------------------
120-150  -----
So that's days bucketed against a count of "things". I'd like to set up a drilldown so that the panel below shows the specific "things" in the clicked bucket. The drilldown is currently set to set a token, but obviously that token is being set to something like "90-120". How do I use this in a meaningful way, i.e., form a search where Days >= the lower limit of the bucket AND <= the upper limit of the bucket? Any help or hints would be appreciated.
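A minimal sketch (assumptions: the drilldown sets a token called bucket to the clicked label such as "90-120", the lower panel filters on a field called Days, and index=your_index is a placeholder): split the token inside the panel's search and compare against both bounds:
index=your_index
| eval lower=tonumber(mvindex(split("$bucket$", "-"), 0)), upper=tonumber(mvindex(split("$bucket$", "-"), 1))
| where Days>=lower AND Days<=upper
The token is substituted as literal text before the search runs, so split/mvindex/tonumber turn the "90-120" label back into numeric limits.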
Hi, I deployed the Exchange add-on TA-Windows-Exchange-IIS on our Exchange servers, and I can confirm that I see IIS events coming in. The problem is that the events have two different IPs: the one at the beginning of each line corresponds to our Exchange servers, and the second (at the end of the line) often corresponds to the public IP address of the remote client. Unfortunately, the field extraction of the add-on only takes the first IP. Is there anything I might be missing? I can see that there is an additional add-on for IIS on Splunkbase. Is it better to use that instead? We are using Exchange 2016. Thanks a lot.
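A minimal sketch of a search-time extraction for the second address (assumptions: the client IP is the last IPv4 address on the line, and client_ip is a made-up field name for illustration):
... | rex "(?<client_ip>\d{1,3}(?:\.\d{1,3}){3})\s*$"
Anchoring on the end of the line pulls the trailing address rather than the server address at the start.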
Hello Splunkers, I created an HTML button on my Splunk dashboard. Now I want to click that button and, on click, show a pop-up that contains a CSV file. TIA,
Hi All, I am trying to create a regular expression to extract a value from a given log. Below is the log:
2021-10-05 07:25:42.986, DATUM2="3095", STATUS="2", REQUEST_TYPE="103", PRIORITY="300", OWNER="490070", COUNT(1)="2"
Here I want to extract the value of "COUNT(1)", and I created the regular expression
(?ms)COUNT\(1\)\=\"(?P<COUNT(1)>\d+)\"
But with this expression I am not able to get the field name as "COUNT(1)", which is my requirement. Please help me modify the expression to get the desired output. Thank you very much.
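Regex named capture groups cannot contain parentheses, so a minimal sketch (an assumption, not the add-on's own extraction): capture into a legal name first, then rename at search time:
... | rex "(?ms)COUNT\(1\)=\"(?<COUNT_1>\d+)\""
    | rename COUNT_1 as "COUNT(1)"
The capture group uses the legal name COUNT_1, and rename then gives the field the literal name COUNT(1) for display and downstream use.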
Hi. The situation: a standalone Splunk instance (test platform) and a UF. The data: a multi-level JSON stream. The problem: because the volume of data is significant, we would like to reduce _raw to only one field. But all the JSON fields are saved as _meta. We have succeeded in updating source, sourcetype, and host from the JSON data, but we cannot omit the _meta fields (they always appear in the search head).
IN:
{ "input": { "type": "log" },
  "log": { "file": "c:\log.josn" },
  "@metadata": { "beat": "filebeat", "version": "7.10.2" },
  "message": "bla bla bla",
  "fields": { "type": "bdc", "host": "VLCR03", "type2": "back" } }
OUT:
_raw: "bla bla bla" <= OK
meta "input.***" <= to suppress
meta "log.***" <= to suppress
meta "@metadata.beat" <= to keep
meta "@metadata.version" <= to suppress
meta "message" <= to suppress
meta "fields.***" <= to suppress
props.conf on the UF:
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
CHARSET = AUTO
KV_MODE = none
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = JSON
TRANSFORMS-x = set_host set_source set_sourcetype
TRANSFORMS-y = extract_message
TRANSFORMS-z = remove_metadata
transforms.conf on the UF:
[extract_message]
SOURCE_KEY = field:message
REGEX = (.*)
FORMAT = $1
DEST_KEY = _raw
[set_host]
SOURCE_KEY = field:fields.host
REGEX = (.*)
FORMAT = host::$1
DEST_KEY = MetaData:Host
[set_source]
SOURCE_KEY = field:log.file
REGEX = (.*)
FORMAT = source::$1
DEST_KEY = MetaData:Source
[set_sourcetype]
SOURCE_KEY = fields:fields.type,fields.type2
REGEX = (.*)\s(.*)
FORMAT = sourcetype::$1:$2
DEST_KEY = MetaData:Sourcetype
[remove_message]
SOURCE_KEY = _meta:message
REGEX = (.*)
DEST_KEY = queue
FORMAT = nullQueue
Hello All, I am testing the upgrade from ES 6.2.0 to 6.6.2. When I do the upgrade, it fails with OSError errno 28, "No space left on device", but there is almost 30 GB of disk space free.
2021-10-04 19:18:28,028 INFO [615bb5deed7f2dc4595650] _cplogging:216 - [04/Oct/2021:19:18:28] HTTP Request Headers:
Remote-Addr: 127.0.0.1
TE: chunked
HOST: splunk-sh1.wv.mentorg.com:8000
ACCEPT-ENCODING: gzip, br
CACHE-CONTROL: max-age=0
SEC-CH-UA: "Google Chrome";v="93", " Not;A Brand";v="99", "Chromium";v="93"
SEC-CH-UA-MOBILE: ?0
SEC-CH-UA-PLATFORM: "Windows"
UPGRADE-INSECURE-REQUESTS: 1
ORIGIN: null
USER-AGENT: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
SEC-FETCH-SITE: same-origin
SEC-FETCH-MODE: navigate
SEC-FETCH-USER: ?1
SEC-FETCH-DEST: document
ACCEPT-LANGUAGE: en-US,en;q=0.9
COOKIE: splunkweb_csrf_token_8000=[REDACTED]5649; session_id_8000=[REDACTED]5b74; token_key=[REDACTED]5649; experience_id=[REDACTED]b0c2; splunkd_8000=[REDACTED]tgchx
REMOTE-USER: admin
X-SPLUNKD: SKdIpkhtf8PlfUDwvOLunA== 11626949294704615649 ijbs1HY^4Ms541EE5sF6eqHg^iyD5t6QKZRByWhdMDXkj546^eB1lT6y59b9LewgHbLcz0Xa5SKotHijcl__zWhYqh8MZISrCqYVxuLkY7jijwyyXijSUQ9VAJRlcQA3o7tgchx 0
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryO0HdVIPxgJr5HUZN
Content-Length: 675766277
2021-10-04 19:18:28,029 INFO [615bb5deed7f2dc4595650] error:333 - POST /en-US/manager/appinstall/_upload 127.0.0.1 8065
2021-10-04 19:18:28,029 INFO [615bb5deed7f2dc4595650] error:334 - 500 Internal Server Error The server encountered an unexpected condition which prevented it from fulfilling the request.
2021-10-04 19:18:28,029 ERROR [615bb5deed7f2dc4595650] error:335 - Traceback (most recent call last):
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 680, in _do_respond
    self.body.process()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 982, in process
    super(RequestBody, self).process()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 559, in process
    proc(self)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 225, in process_multipart_form_data
    process_multipart(entity)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 217, in process_multipart
    part.process()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 557, in process
    self.default_proc()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 717, in default_proc
    self.file = self.read_into_file()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 732, in read_into_file
    self.read_lines_to_boundary(fp_out=fp_out)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 702, in read_lines_to_boundary
    fp_out.write(line)
OSError: [Errno 28] No space left on device
As you can see, there should be plenty of room for a 670 MB upload:
splunk@splunk-sh1:~/var/log/splunk> df -kh /opt/splunk
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/system-splunk   74G   44G   27G  63% /opt
web.conf:
splunk@splunk-sh1:~/var/log/splunk> more ~/etc/system/local/web.conf
[settings]
login_content = <h1> <CENTER>Splunk Dev Search Head</CENTER> </h1>
max_upload_size = 1024
enableSplunkWebSSL = 1
privKeyPath = /opt/splunk/etc/auth/splunkweb/com.key
caCertPath = /opt/splunk/etc/auth/splunkweb/expJun2022.crt
splunkdConnectionTimeout = 1400
tools.sessions.timeout = 180
sslVersions = ssl3,tls
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH
So I am confused why it would say that there is no space left on the device.
Thanks,
ed
Hi, updated: I am trying to break events from nested JSON. Each event starts with { "links":. I have almost got it working. The one small part left is that after each event there is a stray " , ", and because of this the event is not recognized as a JSON event. Any idea how to remove it? Screenshot attached.
Props.conf (about 95% working):
CHARSET = UTF-8
DATETIME_CONFIG =
KV_MODE = json
LINE_BREAKER = ([\r\n,]*(?:{[^[{]+\[)?){"links"
NO_BINARY_CHECK = true
SEDCMD-removefooter = s/(\]\,).*//g
SEDCMD-removeheader = s/\{\"data\": \[//g
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
TIME_PREFIX = "endTime": "
TRUNCATE =
category = Custom
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
Sample data for 3 events (each event starts with {"links":). FYI: there is another, 4th {"links" string, which is an extra value that I will remove using regex; basically, consider only the data inside [ ]. Everything else will be removed with regex since it is unnecessary.
{"data": [{"links": {"self": {"href": "/admin/jobs/81913"}, "file-lists": {"href": "https://test"}, "try-logs": {"href": "https://test"}}, "type": "job", "id": "81913", "attributes": {"jobId": 81913, "parentJobId": 0, "activeProcessId": 19776, "startTime": "2021-10-05T08:14:29.000Z", "endTime": "2021-10-05T08:14:53.000Z", "kilobytesDataTransferred": 0}}, {"links": {"self": {"href": "/admin/jobs/81912"}, "file-lists": {"href": "https://test"}, "try-logs": {"href": "https://test"}}, "type": "job", "id": "81912", "attributes": {"jobId": 81912, "parentJobId": 0,"startTime": "2021-10-05T08:14:04.000Z", "endTime": "2021-10-05T08:14:29.000Z", "jobQueueResource": "", "kilobytesDataTransferred": 0}}, {"links": {"self": {"href": "/admin/jobs/81911"}, "file-lists": {"href": "https://test"}, "try-logs": {"href": "https://test"}}, "type": "job", "id": "81911", "attributes": {"jobId": 81911, "parentJobId": 0, "startTime": "2021-10-05T05:44:01.000Z", "endTime": "2021-10-05T05:44:51.000Z", "kilobytesDataTransferred": 0}}], "meta": {"pagination": {"next": 10, "pages": 42, "last": 410, "offset": 0, "limit": 10, "count": 415, "page": 0, "first": 0}}, "links": {"next": {"href": "https://test"}, "self": {"href": "https://test"}, "last": {"href": "https://test"}, "first": {"href": "https://test"}}}
Thanks
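A minimal sketch of one possible fix (an assumption, not tested against this data): add another SEDCMD to strip a stray comma left at the start or end of each broken event, for example:
SEDCMD-removestraycomma = s/^\s*,\s*|\s*,\s*$//g
SEDCMD rules run per event after line breaking, so this removes only a leading or trailing ", " and leaves the commas inside the JSON alone.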
Hi there, we are trying to configure the MS Graph API for Office 365 to process emails from mailboxes. We created an Azure enterprise application and gave the application the required API access, and the administrator granted consent in the Azure portal. However, when we try to connect to the app, it still asks us to run the 'test connection' and asks for admin consent. Is this a bug? And is there a way to use the Phantom app without this consent being done via the app (done in the Azure portal instead)? Thanks.
Hi, I'm setting up an alert for data flow. The idea is that when the application is not running it sends us an alert, and I use a trigger condition in the alert. Here is the search query:
| eval value1=if(like(sample, "value1"), 1, 0), value2=if(like(sample, "value2"), 1, 0), value3=if(like(sample, "value3"), 1, 0)
| stats sum(value1) as VALUE1, sum(value2) as VALUE2, sum(value3) as VALUE3
| table VALUE1, VALUE2, VALUE3
And for the alert trigger condition I use this:
search VALUE1 = 0
"0" because a sum of 0 indicates that data is not flowing into Splunk, meaning the application is down.
Thanks in advance.
Hi, I need to do a count on the field "titi", which exists in 2 different sourcetypes, subject to 2 conditions: the field "cit" belongs to the sourcetype "citrix" and the field "domain" belongs to the sourcetype "web". And "host" exists in both sourcetypes. So I am doing something like this, but I get no results:
index=tutu sourcetype=citrix OR sourcetype=web
| search (cit<="3") AND domain=west
| stats dc(titi) by host
Is it enough to add a "by host" clause to match the events, or do I have to use a join command? Thanks.
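A minimal sketch of a join-free approach (assumptions: you want hosts that satisfy both conditions, and titi may come from either sourcetype): apply each condition only to its own sourcetype with OR, then check per host that both sourcetypes matched:
index=tutu ((sourcetype=citrix cit<=3) OR (sourcetype=web domain=west))
| stats dc(titi) as titi_count, dc(sourcetype) as matched_sourcetypes by host
| where matched_sourcetypes=2
The original search returns nothing because no single event carries both cit and domain, so the AND has to be evaluated per host after stats rather than per event.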
Hi, has anyone worked with Control-M logs in Splunk? I want to understand which attributes are important to consider when building a dashboard for Control-M logs. Thanks.