All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, I am trying to tune a notable called "DNS Query Length Outliers - MLTK", using the MLTK app to set up the data, but the number of notables remains the same. Am I doing something wrong? I followed some instructions on how to build the data model required for the notable to work, but still no luck. It is worth mentioning that when I run the SPL in Search, it returns a different number of notables. Which option from "Experiments" within the MLTK app should I use to make the data work for the notable? The detection is from here: https://github.com/splunk/security_content/blob/develop/detections/experimental/network/dns_query_length_outliers___mltk.yml   Thank you in advance.
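For reference, the linked detection follows the usual MLTK DensityFunction pattern: a baseline search fits a model on DNS query length, and the scheduled detection applies that model and keeps the outliers. The sketch below is only a recollection of that general shape, not the exact SPL from the YAML; the data model, the model name (dns_query_pdfmodel) and the output field are assumptions to verify against the file.

| tstats count from datamodel=Network_Resolution by DNS.query DNS.src DNS.dest
| `drop_dm_object_name("DNS")`
| eval query_length = len(query)
| apply dns_query_pdfmodel
| where 'IsOutlier(query_length)' > 0

The closest "Experiments" workflow in MLTK is Smart Outlier Detection, which wraps fit DensityFunction — but the experiment only produces the model; the notable count will not change until the correlation search that applies the model runs against a populated Network_Resolution data model.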
Will your SSL Certificate Checker work for Windows domain controllers?
Hi All, I am getting the error below when sending Microsoft SQL Server table data to a Splunk index through Splunk DB Connect:

2023-01-19 14:02:37.006 +0000 [QuartzScheduler_Worker-19] INFO c.s.dbx.server.dbinput.recordwriter.HecEventWriter - action=write_records batch_size=1000
2023-01-19 14:02:37.006 +0000 [QuartzScheduler_Worker-19] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector
2023-01-19 14:02:37.006 +0000 [QuartzScheduler_Worker-19] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector record_count=1000
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - I/O exception (java.net.SocketException) caught when processing request to {}->http://127.0.0.1:8088: Connection reset by peer (Write failed)
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - Retrying request to {}->http://127.0.0.1:8088
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - I/O exception (java.net.SocketException) caught when processing request to {}->http://127.0.0.1:8088: Connection reset by peer (Write failed)
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - Retrying request to {}->http://127.0.0.1:8088
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - I/O exception (java.net.SocketException) caught when processing request to {}->http://127.0.0.1:8088: Connection reset by peer (Write failed)
2023-01-19 14:02:37.007 +0000 [QuartzScheduler_Worker-19] INFO org.apache.http.impl.execchain.RetryExec - Retrying request to {}->http://127.0.0.1:8088
2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch java.net.SocketException: Connection reset at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157) at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) at com.codahale.metrics.httpclient.InstrumentedHttpRequestExecutor.execute(InstrumentedHttpRequestExecutor.java:46) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:141) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:113) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollectorLoadBalancer.uploadEvents(HttpEventCollectorLoadBalancer.java:52) at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36) at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:234) at org.easybatch.core.job.BatchJob.call(BatchJob.java:102) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] ERROR org.easybatch.core.job.BatchJob - Unable to write records java.net.SocketException: Connection reset at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157) at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) at com.codahale.metrics.httpclient.InstrumentedHttpRequestExecutor.execute(InstrumentedHttpRequestExecutor.java:46) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:141) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:113) at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollectorLoadBalancer.uploadEvents(HttpEventCollectorLoadBalancer.java:52) at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36) at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:234) at org.easybatch.core.job.BatchJob.call(BatchJob.java:102) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] INFO org.easybatch.core.job.BatchJob - Job 'testing_1234' finished with status: FAILED
2023-01-19 14:02:37.008 +0000 [QuartzScheduler_Worker-19] DEBUG c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=closing_db_reader task=testing_1234
2023-01-19 14:02:56.930 +0000 [dw-67 - PUT /api/inputs/testing_1234] INFO com.splunk.dbx.server.task.DefaultTaskService - action=removing_task_from_scheduler task=testing_1234 type=input
2023-01-19 14:02:56.931 +0000 [dw-67 - PUT /api/inputs/testing_1234] INFO org.easybatch.extensions.quartz.JobScheduler - Unscheduling job org.easybatch.core.job.BatchJob@279b98a9
2023-01-19 14:09:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1
2023-01-19 14:19:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1
2023-01-19 14:29:44.304 +0000 [Checkpoint-Cleaner-0] INFO c.s.d.s.m.CheckpointCleaner$CheckpointCleanerTask - action=start_checkpoint_cleaner_task
2023-01-19 14:29:44.305 +0000 [Checkpoint-Cleaner-0] INFO c.s.d.s.m.CheckpointCleaner$CheckpointCleanerTask - action=checkpoint_files_to_be_deleted []
2023-01-19 14:29:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1
2023-01-19 14:39:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1
2023-01-19 14:49:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1
2023-01-19 14:59:44.320 +0000 [Single-Thread-Executor-0] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=clean_up_connection_pools current_connection_pools_size=1
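For reference, the repeated "Connection reset by peer" against http://127.0.0.1:8088 with no HTTP response is the classic symptom of a scheme mismatch: HEC listening with SSL enabled while the client speaks plain HTTP (or HEC being disabled entirely). A minimal sketch of the HEC-side setting involved, assuming the default splunk_httpinput location — verify the path on your instance:

# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf on the instance DB Connect sends to
[http]
disabled = 0
enableSSL = 1    # when this is 1, clients must use https://...:8088; plain http connections get reset

Whichever way you resolve it, the URI scheme configured in DB Connect's HEC settings and the enableSSL value have to agree; the token-free endpoint https://127.0.0.1:8088/services/collector/health (or its http variant) makes it easy to see which scheme the port actually speaks.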
Hello Splunkers, I have this specific use case: I need to retrieve Windows OS logs from multiple machines via the Splunk UF and the Splunk Microsoft TA, forward those logs to an intermediate HF in charge of parsing and anonymization, and then forward them again to another HF. Since an image is worth 1000 words, I imagine these two architecture possibilities:

Choice 1: (diagram)

Choice 2: (diagram)

Points I wonder about:
- Within choice 1, is it a problem to have props and transforms in an app deployed on UFs? Same question: is it a problem to have some inputs.conf on the Linux HF1?
- I will have other use cases like this one. Would you recommend gathering all the anonymization settings (props & transforms) in a dedicated app, or would you put them in each TA related to the use case (so basically like choice 1 of the architecture)?

Thanks a lot for your help!
GaetanVP
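For reference, a minimal sketch of what the anonymization settings might look like on the parsing heavy forwarder (HF1), assuming a SEDCMD-based mask; the sourcetype and the pattern being masked are made-up placeholders rather than the Microsoft TA's real settings:

# props.conf in an app on HF1 (the first "full" Splunk instance in the path)
[WinEventLog:Security]
SEDCMD-mask_account = s/Account Name:\s+\S+/Account Name: ########/g

Because SEDCMD and index-time TRANSFORMS are applied where events are parsed, deploying them to universal forwarders normally has no effect (the UF does not parse, apart from structured INDEXED_EXTRACTIONS cases), so they are harmless but useless there. inputs.conf, on the other hand, belongs on whichever instance actually collects data, so having some inputs.conf on HF1 is perfectly normal if HF1 itself monitors files or listens on ports.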
Hello, I have a regex for splitting a person's full name into last name, first name and middle name.

Regex used:
(?<prsnl_last>\w+)([\s]*(?<prsnl_credentials>[\w]*)|)[\s]*,[\s]*(?<prsnl_first>\w+)([\s]*(?<prsnl_middle>[\w]+.*?)|)

Now I need to remove prsnl_last from the output (basically, to scrub the last name out of the event data). The output should look something like this: "Haikal" and "Campbeli" should be removed. Can someone please help me out? Thank you!
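For reference, a minimal sketch of one way to do this, assuming the last name is the token in front of the comma (as in the regex above) and that the goal is to drop it from search output rather than at index time; the field names simply reuse the ones from the question:

... your base search ...
| rex "(?<prsnl_last>\w+)([\s]*(?<prsnl_credentials>[\w]*)|)[\s]*,[\s]*(?<prsnl_first>\w+)([\s]*(?<prsnl_middle>[\w]+.*?)|)"
| fields - prsnl_last

If the last name has to disappear from the raw event text itself rather than just from the field list, a search-time rex mode=sed substitution (or the equivalent SEDCMD in props.conf for index time) is the usual alternative, for example | rex mode=sed "s/^\w+(\s*\w*\s*,)/########\1/" — the exact pattern is an assumption to adapt to your data.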
I'm trying to get data by registering Python code as a Splunk scripted input, but the problem only occurs when I run the code from Splunk. When I run it locally with Splunk's Python directly, there is no problem, but when I run it from Splunk, I get the following error:

Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known

So I searched and tried using the http scheme and requests.get(verify=False), but the same error occurs. There is no error when I run it locally, so it seems to be a problem when sending a request from Splunk. If anyone knows, I'd appreciate any help.

import requests as req
import json

def tmdb_api_call(requestURL, parameters):
    response = req.get(url=requestURL, params=parameters)
    if response.status_code != 200:
        print(response.json())
        exit()
    data = response.json()
    return json.dumps(data)

def get_upcoming_movies_by_page(api_key, page_number=1):
    requestURL = "http://api.themoviedb.org/3/movie/upcoming"
    parameters = {"api_key": api_key, "page": page_number}
    return tmdb_api_call(requestURL, parameters)

def main():
    api_key = "key"
    upcoming_movie_list = get_upcoming_movies_by_page(api_key, 1)
    data = json.loads(upcoming_movie_list)
    print(json.dumps(data["results"]))

main()

(code from https://github.com/siddharthajuprod07/youtube)
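For reference, "nodename nor servname provided, or not known" is a DNS-resolution failure, so the request never leaves the box: the environment splunkd launches the script in cannot resolve api.themoviedb.org, which usually comes down to the daemon not inheriting the proxy or resolver settings your interactive shell has. A hedged sketch of the common workaround, passing an explicit proxy to requests — the proxy URL is a placeholder and only applies if the Splunk host reaches the internet through one:

import os
import requests as req

# Placeholder proxy - replace with your real proxy, or drop proxies= entirely
# if the host has direct internet access and only the resolver differs.
PROXIES = {
    "http": os.environ.get("HTTP_PROXY", "http://proxy.example.com:8080"),
    "https": os.environ.get("HTTPS_PROXY", "http://proxy.example.com:8080"),
}

def tmdb_api_call(requestURL, parameters):
    # timeout keeps a broken lookup from hanging the scripted input
    response = req.get(url=requestURL, params=parameters, proxies=PROXIES, timeout=30)
    response.raise_for_status()
    return response.json()

Comparing nslookup api.themoviedb.org as the user that runs splunkd against your own shell is a quick way to confirm whether it is a resolver difference rather than anything in the code.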
Hi, I recently came across this warning on Splunk web and was just wondering if anyone else has encountered this before and how to go about solving it? The warning is as follows: "Events are not displayed in the search results because _raw fields exceed the limit of 16777216 characters. Ensure that _raw fields are below the given character limit or switch to the CSV serialization format by setting 'results_serial_format=csv' in limits.conf. Switching to the CSV serialization....." Any input is greatly appreciated and thank you in advance. Mikhael
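For reference, the setting the warning names lives in limits.conf; a minimal sketch, assuming you have filesystem access to the search head (on Splunk Cloud this has to go through support or an app):

# $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local/ directory), then restart
[search]
results_serial_format = csv

The alternative the message hints at is keeping _raw under the 16,777,216-character cap, which usually means finding whatever source is producing multi-megabyte single events (TRUNCATE settings or a mis-merged multiline sourcetype) rather than changing the serialization format.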
Hi Splunkers, in our environment we have seen event peaks every day between certain hours, namely 03:00 - 04:00 AM. Now we have to identify which sources are responsible for those peaks. The search we must build has to do this: for every day of a certain time period (for example, a week), at the same hour (03:00 - 04:00 AM), show the total count of events for every source. I know that with this search:

index=* | chart count over index by sourcetype

I'm able to show a column chart that gives me the total count of events for every index and, for each of them, a picture of the involved sourcetypes. But how do I tell the search "show me this count for every day, restricted to the 03:00 - 04:00 AM time range"?
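A minimal sketch of one way to do it, filtering each day down to the 03:00-04:00 window with the date_hour field and then counting per day and per source (swap source for sourcetype or index as needed):

index=* date_hour=3 earliest=-7d@d latest=@d
| bin _time span=1d
| stats count by _time source
| xyseries _time source count

One caveat worth hedging: date_hour comes from the timestamp text in the raw event, so if your data spans time zones it can disagree with _time; eval hour=strftime(_time,"%H") followed by where hour="03" is the timezone-safe variant.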
Hi, I have edited the inputs.conf file inside app.tgz. How can we package it up again after editing the config file on Windows?   ty
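A minimal sketch, assuming a recent Windows build where tar.exe (bsdtar) ships with the OS — otherwise 7-Zip can do the same gzip-over-tar in two steps:

:: run from the folder that contains app.tgz
tar -xzf app.tgz
:: ...edit <app_name>\local\inputs.conf (or default\inputs.conf)...
tar -czf app.tgz <app_name>

The important part is that the archive contains the app folder itself at the top level (not the files loose), since Splunk expects app.tgz to unpack to <app_name>/default, <app_name>/local, and so on.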
In my Splunk Cloud instance, I am ingesting WAF security events from a SaaS service via HEC. The events are in JSON format, so my HEC data input is configured with a sourcetype of _json. I now need to ingest the WAF request events from the SaaS service, which are also in JSON format, so I'd like to send those to the same index but with a different sourcetype to distinguish the two types of events.  How can I modify the sourcetype for the WAF security events from _json to waf_sec, and then create a new HEC data input for the WAF request events with a sourcetype of waf_req, while retaining the JSON format? Thx
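For reference, each HEC token carries its own default sourcetype, so the usual pattern is to point the existing token at waf_sec, create a second token whose default is waf_req, and give both sourcetypes the same JSON handling that _json has. A minimal props.conf sketch — whether you want search-time or index-time JSON extraction is an assumption to settle for your volume and use case:

# props.conf (via the Source types management page or a private app on Splunk Cloud)
[waf_sec]
KV_MODE = json

[waf_req]
KV_MODE = json

# _json itself uses index-time extraction; for that behaviour use
# INDEXED_EXTRACTIONS = json with KV_MODE = none instead.

The sender can also override the sourcetype per event in the HEC payload ({"sourcetype": "waf_req", "event": {...}}), which avoids creating a second token if the SaaS service lets you control the payload.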
Hey folks, I have a query as such:

.. | ID="*" AND STATUS="*" | table _time ID STATUS

Here is the result I got: for the same ID, I get every status that was logged, but I would like to have only the latest status here. I did try a few things like:

... | ID="*" AND STATUS="*" | stats latest(STATUS) by ID | table _time ID STATUS

Here is what I got: I did get the unique IDs, but the rest of the fields come back as null. Could you please help me here?
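For reference, stats only keeps the fields it aggregates or groups by, which is why _time and everything else come back null after stats latest(STATUS) by ID. A minimal sketch that carries the timestamp through as well (field names taken from the question):

... ID="*" AND STATUS="*"
| stats latest(_time) as _time latest(STATUS) as STATUS by ID
| table _time ID STATUS

| sort 0 -_time | dedup ID is the other common way to keep the whole latest event per ID when more fields than STATUS are needed.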
Does the Website Monitoring app support TLS v1.3 for target URLs? I haven't been able to find any documentation about this, and I wasn't able to get it to work locally. I appreciate any help with it.
I am using custom JS from the dashboard ui.xml:

<form version="1.1" stylesheet="vulnerability_center.css" script="multiselect_input.js">

You can find my JS file below:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(_, mvc) {
    function setupMultiInput(instance_id) {
        var multiselect = mvc.Components.get(instance_id);
        const without = (array, filtered) => array.filter(n => n != filtered);
        if (multiselect) {
            multiselect.on("change", function() {
                let current_val = multiselect.val();
                let first_choice_value = multiselect.options.choices[0].value;
                if (current_val.length > 1 && current_val.indexOf(first_choice_value) == 0) {
                    multiselect.val(without(current_val, first_choice_value));
                }
                if (current_val.length > 1 && current_val.includes(first_choice_value) && current_val.indexOf(first_choice_value) != 0) {
                    multiselect.val([first_choice_value]);
                }
            });
        }
    };

    var all_multi_selects = document.getElementsByClassName("input-multiselect");
    for (let j = 0; j < all_multi_selects.length; j++) {
        setupMultiInput(all_multi_selects[j].id);
    }
})
Hi, I have a requirement to alert on all users who have pressed "export" in Splunk. I have written the SPL below for listing users who have exported search results or dashboard panels:

index=_internal export
| regex uri_path="(jobs|results|events)\/export$"
| table user
| dedup user

But this is not catching the dashboard exports. I want to alert on users who have exported a complete dashboard in PDF format. Kind help will be appreciated.
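For reference, full-dashboard PDF exports do not go through the jobs/results/events export endpoints; they are rendered via the pdfgen endpoint, so a sketch along these lines should pick them up — the exact sourcetype and URI shape are assumptions to confirm against your _internal data:

index=_internal sourcetype=splunk_web_access uri_path="*pdfgen/render*"
| table _time user uri_path status
| dedup user

If nothing shows up under splunk_web_access, searching index=_internal "pdfgen/render" without the sourcetype filter is a quick way to find which access log records it in your environment.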
Running the Customer Success Toolkit's error report I noticed a warning on lots of Universal Forwarders that doesn't make sense to me:

01-19-2023 10:49:53.614 +0100 WARN UserManagerPro - Can't find [distributedSearch] stanza in distsearch.conf, using default authtoken HTTP timeouts
host = Unix Box
source = /opt/forwarder/data/var/log/splunk/splunkd.log
sourcetype = splunkd

01-19-2023 10:33:28.659 +0100 WARN UserManagerPro - Can't find [distributedSearch] stanza in distsearch.conf, using default authtoken HTTP timeouts
host = Windows Box
source = C:\Program Files\Splunk\UniversalForwarder\var\log\splunk\splunkd.log
sourcetype = splunkd

Our Universal Forwarders have no distsearch.conf. Any idea why this is reported? And how to turn it off? We already have enough noise in our data.

Thanks in advance
Volkmar
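For what it's worth, the message is splunkd falling back to default authtoken HTTP timeouts because no [distributedSearch] stanza exists — which is expected on a UF — so it appears cosmetic. If the goal is just to silence it, the logging level of that component can be raised on the forwarders; a sketch, assuming the standard log.cfg override mechanism (verify the category name against the WARN line itself):

# $SPLUNK_HOME/etc/log-local.cfg on the forwarders (create the file if it does not exist), then restart
[splunkd]
category.UserManagerPro = ERROR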
I have 6 panels on my dashboard and each one of them shows the number of customers going through each phase of a journey. Take this made-up example:

Panel 1: 400
Panel 2: 250
Panel 3: 330
Panel 4: 120
Panel 5: 80

Is it possible to add a Panel 6 which automatically shows the difference between the values from Panel 1 and Panel 2? The queries behind them are very complex, so I don't want to re-run the same two queries inside Panel 6 to get this value.
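A minimal sketch of the token approach in Simple XML, which avoids re-running the heavy searches: each existing panel publishes its final value into a token from its <done> handler, and Panel 6 just does the arithmetic. The token names are made up, and $result.count$ assumes the panel's result field is called count — adjust to whatever your panels actually output:

<!-- inside Panel 1's existing search -->
<search>
  <query>... existing complex query for panel 1 ...</query>
  <done>
    <set token="panel1_total">$result.count$</set>
  </done>
</search>

<!-- Panel 6: a trivial search over the two tokens -->
<search>
  <query>| makeresults | eval difference = $panel1_total$ - $panel2_total$ | table difference</query>
</search>

If the panels can share a base search (<search id="base">), the same idea works with post-process searches, which is the other common way to avoid duplicate execution.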
Hi Team, We have a requirement to filter out events from the IIS logs before ingestion if the event contains "GET / - 80 -" OR "GET / - 443 -".

QUERY: index="*"  "GET / - 80 -" OR "GET / - 443 -"

Sample Format:
2022-12-12 00:38:10 xx.yyy.zzz.aaa GET / - 80 - xx.yyy.z.a - - x00 0 0 0
2022-12-12 00:44:45 xx.yyy.zzz.aaa GET / - 443 - xx.yyy.z.a - - y00 0 0 x

Hence kindly help with the relevant props and transforms for the same.
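A minimal sketch of the usual nullQueue approach; it has to live on the first full Splunk instance that parses the data (heavy forwarder or indexers — on Splunk Cloud, indexer-side props/transforms go in via an app or support), and the sourcetype name here is a placeholder for your real IIS sourcetype:

# props.conf
[your_iis_sourcetype]
TRANSFORMS-drop_root_probes = drop_iis_root_get

# transforms.conf
[drop_iis_root_get]
REGEX = \sGET\s/\s-\s(80|443)\s-\s
DEST_KEY = queue
FORMAT = nullQueue

Keep in mind the regex runs against the whole raw event, so make it as specific as the real traffic requires — anything it matches never gets indexed.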
The position of the IP address changes in the logs (it appears before or after an https URL). In such a scenario, how can a regex be written to extract the source IP address? Please guide me. Below are the sample logs.

<14>Jan 19 04:32:59 XXXX  accesslog_SIEM: Info: 1674102779.121 7 - 10.130.130.152 TCP_DENIED_SSL/403 0 POST https://activity.windows.com:443/v3/feeds/me/$batch - v3/feeds/me/$batch "Domainname\user@Domainname" - DROP_WEBCAT_7-BGC.BlockInternetAccess.DP-DOMPVM.Generalusers.ID-NONE-NONE-NONE-NONE-NONE - 61519 activity.windows.com 443 1 IW_comp 5.0 - - - - - - - - - - - - - - - - - IW_comp - "Computers and Internet" "Unknown" "Unknown" - - 0.00 0 - - - - - - - - - - - - - "SGPlatform 2.0" 21040

<14>Jan 19 04:32:59 XXXX accesslog_SIEM: Info: 1674102778.930 114 "https://www.XXX.com/English/home/default.aspx" 10.130.80.223 TCP_CLIENT_REFRESH_MISS_SSL/200 785 GET https://px.ads.linkedin.com:443/collect?v=2&fmt=js&pid=4881225&time=1674102778231&url=https%3A%2F%2Fwww.XXX.com%2FEnglish%2Fhome%2Fdefault.aspx&cookiesTest=true&liSync=true px.ads.linkedin.com collect?v=2&fmt=js&pid=4881225&time=1674102778231&url=https%3A%2F%2Fwww.XXX.com%2FEnglish%2Fhome%2Fdefault.aspx&cookiesTest=true&liSync=true - application/javascript ALLOW_CUSTOMCAT_12-ZAMLUM.FullAccess.AP-AuthByPass.ID-NONE-NONE-NONE-DefaultGroup-NONE - 54083 px.ads.linkedin.com 443 1 C_Allo 4.0 - - - - - - - - - - - - - - - - - IW_snet - "Social Networking" "Unknown" "Unknown" - - 55.09 0 - - - - - - - - - - - - - "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) browser/2020.2.6 Chrome/87.0.4280.141 Electron/11.3.99 Safari/537.36 PingdomTMS/2020.2" 1162
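For reference, a pattern that sidesteps the position problem: in both samples the client IP is the token immediately in front of the cache-result field (TCP_DENIED_SSL/403, TCP_CLIENT_REFRESH_MISS_SSL/200, ...), so anchoring on that is more reliable than counting fields around the URL. A minimal sketch, assuming the TCP_* result code is always present in these access logs:

... your base search ...
| rex "(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})\s+TCP_\w+/\d+"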
Hello everyone, I'm new to regex. Can you please help me extract only the URL host name, up to .com or .net? The regex GET\s\w+:(?<URL>[^"]+) captures the whole thing, but I need to capture only up to .com or .net. Please also help me get the fields http_method and status. Below are the sample log lines.

<14>Jan 19 04:32:59 XXXXXX accesslog_SIEM: Info: 1674102779.113 336 - 10.X.X.X TCP_MISS/200 271 GET http://us-hnl-anx-r001.router.teamviewer.com/din.aspx?s=00000000&id=909083993&client=DynGate&p=10000001 us-hnl-anx-r001.router.teamviewer.com din.aspx?s=00000000&id=909083993&client=DynGate&p=10000001 - application/octet-stream DEFAULT_CASE_12-DOMPVM.WebControl.AP-DOMPVM.WebControl.ID-NONE-NONE-NONE-DefaultGroup-NONE - 53843 us-hnl-anx-r001.router.teamviewer.com 80 1 IW_meet 5.0 0 - "0" 0 0 1 - - - - - 0 0 - - - - IW_meet - "Online Meetings" "TeamViewer" "Presentation / Conferencing" - - 6.45 0 - - 0 "Unknown" - 0 "Unknown" - - - - - - "Mozilla/4.0 (compatible; MSIE 6.0; DynGate)" 191

<14>Jan 19 04:32:59 XXXXX accesslog_SIEM: Info: 1674102779.121 7 - 10.130.130.152 TCP_DENIED_SSL/403 0 POST https://activity.windows.com:443/v3/feeds/me/$batch - v3/feeds/me/$batch "INDIADomain\username@INDIADomain" - DROP_WEBCAT_7-BGC.BlockInternetAccess.DP-DOMPVM.Generalusers.ID-NONE-NONE-NONE-NONE-NONE - 61519 activity.windows.com 443 1 IW_comp 5.0 - - - - - - - - - - - - - - - - - IW_comp - "Computers and Internet" "Unknown" "Unknown" - - 0.00 0 - - - - - - - - - - - - - "SGPlatform 2.0" 21040
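A minimal sketch against the two samples above; it assumes the method is one of the usual HTTP verbs, that the host always ends in .com or .net as stated, and that the status you want is the numeric code after the TCP_* result (200, 403, ...):

... your base search ...
| rex "(?<http_method>GET|POST|PUT|HEAD|DELETE|OPTIONS)\shttps?://(?<URL>[^\s:/]+\.(?:com|net))"
| rex "TCP_\w+/(?<status>\d{3})"

Against the samples this yields http_method=GET, URL=us-hnl-anx-r001.router.teamviewer.com, status=200 for the first event and http_method=POST, URL=activity.windows.com, status=403 for the second.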
I have a timechart which I would like to chart in Dashboard Studio:

| eval Len=length(_raw)
| timechart sum(Len) AS Size count AS Count BY index

Counts and Sizes should go on separate axes, obviously, but I don't understand how to get them there! I had some limited success with y2fields, but couldn't use the same format with yFields, and I just don't understand the "> y | getField()" controls for fields in tile options. Advice and assistance appreciated!