All Topics


We are planning to upgrade ES from 6.6.2 to 7.0.1. One of the new features shows a pop-up window indicating that a new Content Update version is available and offers the option to upgrade to the new version. We'd like to suppress this pop-up and/or prevent the update through the UI. Would either of the two settings below prevent the pop-up? If we can't suppress the pop-up, will either of them help prevent the update from occurring?

web.conf: Setting 'updateCheckerBaseURL' to 0 stops Splunk Web from pinging Splunk.com for new versions of Splunk software.
app.conf: Setting 'check_for_updates' to 0; this setting determines whether Splunk Enterprise checks Splunkbase for updates to this app.

Reference: https://docs.splunk.com/Documentation/ES/7.0.0/RN/Enhancements - "Automated updates for the Splunk ES Content Update (ESCU) app: When new security content is available, the update process is built into Splunk Enterprise Security so that ES admins always have the latest security content from the Splunk Security Research Team."
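For reference, a minimal sketch of where those two settings would sit if they are tried (stanza placement taken from the web.conf and app.conf spec files; the ESCU app directory name is an assumption and should be verified):

$SPLUNK_HOME/etc/system/local/web.conf
[settings]
# stop Splunk Web from checking splunk.com for newer Splunk versions
updateCheckerBaseURL = 0

$SPLUNK_HOME/etc/apps/DA-ESS-ContentUpdate/local/app.conf   (app directory name assumed)
[package]
# stop Splunkbase update checks for this app
check_for_updates = 0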
Hello all! Newbie here, so please forgive the ignorance in advance! I have a search:

index="zscaler" reason="Reputation block outbound request: malicious URL" | dedup _time | stats count as siteCount by url,user | where siteCount > 3 | search earliest=-24h

When running this search in the search bar, the time picker is overriding the 24-hour search criterion, which from what I read in the documentation shouldn't occur (unless it's a subsearch). I'm using this for alerting purposes, so I want to be sure to specify the time frame I'd like to search. Any suggestions?
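For reference, a sketch of a commonly suggested variant that moves the time bound into the base search. The usual explanation (hedged, to be verified) is that earliest in a later | search clause filters on _time, which stats has already removed at that point, while the time picker only bounds the base search:

index="zscaler" reason="Reputation block outbound request: malicious URL" earliest=-24h
| dedup _time
| stats count as siteCount by url, user
| where siteCount > 3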
Hello,
Hi there after much searching and testing i feel i'm stuck. Or even unsure what i want is possible.  What i want I have _json data indexed. Each event is a long array. I want Splunk to automatically make key:value pairs per value. Until now, Splunk gives me all the values instead of 1 single value. Also it seems Splunk can't make correlations between fields.  I want to use fields so i can do simple searches, like making a table for "internal" "website_url"s and their  status ("up" or "down").    Example event {"data":[{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234562","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456","123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234563","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234564","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":[],"status":"up","tags":["internal"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234562","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234560","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234562","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234568","paused":false,"name":"adyen","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234567","paused":false,"name":"paynl","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234562","paused":false,"name":"trustpay","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime"
:100},{"id":"1234563","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234566","paused":false,"name":"spryng","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external","sms gateway"],"uptime":100},{"id":"1234568","paused":false,"name":"messagebird","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234563","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234564","paused":false,"name":"mitek","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234566","paused":false,"name":"bitstamp","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":[external"],"uptime":100},{"id":"1234560","paused":false,"name":"kraken","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":3600,"contact_groups":[],"status":"up","tags":["external"],"uptime":100},{"id":"1234569","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":[],"uptime":100},{"id":"1234567","paused":false,"name":"Blox login","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":[],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":[],"uptime":100},{"id":"1234564","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100}],"metadata":{"page":1,"per_page":25,"page_count":2,"total_count":26}}   How far i got   source="/opt/splunk/etc/apps/randomname/bin/statuscake_api.sh" | spath output=id path=data{}.id | spath output=url path=data{}.website_url | spath output=status path=data{}.status | search id=8179640 | table id, url, status  However, it shows a table of all aray fields, not just one specific 
'id' I specified in the search part: | search id=<idnumber>   Screenshot
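For reference, a sketch of a pattern often used for arrays like this: split the data{} array into one result per element before filtering, so each id/website_url/status stays paired (field names taken from the sample event above; untested against the real data):

source="/opt/splunk/etc/apps/randomname/bin/statuscake_api.sh"
| spath path=data{} output=item
| mvexpand item
| spath input=item
| search tags="internal"
| table id, website_url, status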
Hi All, I have the following saved search:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Change where NOT [|`change_whitelist_generic`] nodename="All_Changes.Account_Management.Accounts_Updated" AND All_Changes.log_region=* AND All_Changes.log_country=* AND (All_Changes.command=passwd OR All_Changes.result_id IN (4723, 4724)) by All_Changes.log_region, All_Changes.log_country, index, host, All_Changes.Account_Management.src_user, All_Changes.user, _time | `drop_dm_object_name("All_Changes")` | rename Account_Management.src_user as src_user

My customer asked me to exclude results where Account_Management.src_user=user1 and All_Changes.Account_Management.src_nt_domain=All_Changes.Account_Management.dest_nt_domain. So I tried something like this, but it does not seem to work:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Change where NOT [| `change_whitelist_generic`] nodename="All_Changes.Account_Management.Accounts_Updated" AND All_Changes.log_region=* AND All_Changes.log_country=* AND (All_Changes.command=passwd OR All_Changes.result_id IN (4723, 4724)) by All_Changes.log_region, All_Changes.log_country, index, host, All_Changes.Account_Management.src_user, All_Changes.user, All_Changes.Account_Management.dest_nt_domain, All_Changes.Account_Management.src_nt_domain, _time | `drop_dm_object_name("All_Changes")` | search NOT (Account_Management.src_user=user1 AND Account_Management.src_nt_domain=Account_Management.dest_nt_domain) | rename Account_Management.src_user as src_user

Do you have any advice? Thank you!
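For reference, a hedged sketch of the exclusion rewritten with where instead of search: in the search command the right-hand side of field=field is treated as a literal string rather than as another field, so the field-to-field comparison between src_nt_domain and dest_nt_domain may need where (field names assumed to keep the Account_Management. prefix after the macro, as in the query above):

... | `drop_dm_object_name("All_Changes")`
| where NOT ('Account_Management.src_user'="user1" AND 'Account_Management.src_nt_domain'='Account_Management.dest_nt_domain')
| rename Account_Management.src_user as src_user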
Scenario: I have a log file. I am able to extract fields from the log events and can see the data in the extracted fields. But when I filter the data using an extracted field, I am unable to see any results, even though the field has data. I have referred to this link, "http://blogs.splunk.com/2011/10/07/cannot-search-based-on-an-extracted-field/", but it doesn't help. Please help us regarding this.
Hello, I am currently working on a use case which has complex ingested data with nested JSON. The data I am trying to capture is the non-compliant entries. I am looking for guidance on how to break the nested JSON objects in the array out into fields. Here is the redacted information I currently have, thank you!

Search I am using:
index=fsctcenter sourcetype=fsctcenter_json | regex "Non Compliant[^\:]+\:\"\d+\"\,\"status\":\"Match" | rex field=_raw "policy_name\":\"(?<policy_name>[a-zA-z1-9\.\s+]+Non\sCompliant[^\"]+)" | rex field=_raw "rule_name\":\"(?<rule_name>[a-zA-z1-9\.\s+]+Non\sCompliant[^\"]+)"

Raw:
{"ctupdate":"policyinfo","ip":"X.X.X.X","policies":[{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},...etc

List (as rendered in the event viewer):
policies: [
  {
    policy_name: XXXX
    rule_name: XXXX
    since: XXXX
    status: XXXX
  }
  {
    policy_name: XXXX
    rule_name: XXXX
    since: XXXX
    status: XXXX
  }
  Etc...

Currently Splunk ES is not itemizing the fields correctly for the nested JSON above. Any help or guidance would be greatly appreciated, thanks!
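For reference, a sketch of an spath/mvexpand approach that is often used for arrays like this instead of regexing _raw, so that each policies{} element becomes its own result with its fields paired correctly (field names taken from the redacted sample; not validated against the real events):

index=fsctcenter sourcetype=fsctcenter_json
| spath path=policies{} output=policy
| mvexpand policy
| spath input=policy
| search policy_name="*Non Compliant*" status="Match"
| table policy_name, rule_name, since, status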
Hello folks, is there a tool that helps with sizing a server that will run accelerated data models? Or what is the best way to achieve that goal? It seems that the Splunk base configuration of 12 CPUs / 12 GB of RAM is not enough. Thank you all.
I have a stats table with output in the below format:

Device        Timestamp      Action
some value    some value     1
some value    some value     2
..            ..             ..
some value    some value     10
some value    some value     1
some value    some value     2
..            ..             ..
some value    some value     10

So the Action column repeats the pattern after a certain number of rows. How can I group these into single fields, i.e. store each full iteration as a multivalue field?
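For reference, a sketch of one way to group the repeating pattern, assuming each iteration is exactly 10 rows as in the example above (the iteration length is an assumption):

... | streamstats count as row_num
| eval iteration = ceiling(row_num / 10)
| stats list(Device) as Device, list(Timestamp) as Timestamp, list(Action) as Action by iteration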
Hi Team, Is it possible to stop an alert for a particular time window? Suppose I have an alert already created and running, and I want to stop it on the coming Saturday from 1 PM to 4 PM. Is it possible without doing it manually and without using the cron schedule? Please help. Thank you.
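For reference, a sketch of a workaround sometimes used when the schedule itself cannot be changed: append a guard to the alert search so it returns nothing during the window (the date below is only an example and would need to match the Saturday in question):

... existing alert search ...
| where NOT (strftime(now(), "%Y-%m-%d") = "2022-09-17" AND tonumber(strftime(now(), "%H")) >= 13 AND tonumber(strftime(now(), "%H")) < 16)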
Hello Splunkers!! As per the below screenshot, I want to capitalise the first letter of every field column. I have tried the workarounds shown commented out in the screenshot. Please suggest how I can capitalise the first letter of every field name.
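For reference, a sketch of a transpose-based trick that is sometimes used to rewrite column headers, assuming the result set is small enough to transpose (not tested against the dashboard in the screenshot):

... | transpose 0
| eval column = upper(substr(column, 1, 1)).substr(column, 2)
| transpose 0 header_field=column
| fields - column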
I am trying to send my Cloudflare HTTP logs to my externally exposed Splunk heavy forwarder (on prem). I have installed the Cloudflare App on the heavy forwarder and on the search head: https://splunkbase.splunk.com/app/4501/#/details I know the data is making it to my heavy forwarder that has the application installed. However, the data isn't being correctly ingested. I am finding this type of log in the _internal index on my forwarder, and it appears once for each event that Cloudflare has sent to the forwarder. I have rebooted the forwarder since adding the application:

09-15-2022 10:16:22.804 -0400 WARN TcpOutputProc [5288 indexerPipe] - Pipeline data does not have indexKey. [_hecTeleVersionKey] = default\n[_hecTeleAppKey] = default\n[_raw] = \n[_meta] = punct::\n[MetaData:Source] = source::http:Cloudflare5xx\n[MetaData:Host] = host::readactedhost.com\n[MetaData:Sourcetype] = sourcetype::cloudflare:json\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_time] = 1663251382\n[_conf] = source::http:Cloudflare5xx|host::readactedhost.com|cloudflare:json|\n

My HEC token is configured as:

[http://Cloudflare5xx]
description = Used to get cloudflare logs into splunk for server 5xx errors
disabled = 0
indexes = cloudflare
token = 7xxxxxxxx

I am stumped as to what "Pipeline data does not have indexKey" means and cannot find a next step. If the logs are being sent and making it to the forwarder, are there more steps beyond having the application there to interpret the information sent to the services/collector/raw endpoint? I have never ingested on the /raw endpoint before, so I wonder if something is missing.
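For reference, one thing the warning may point at, offered as an assumption to verify rather than a confirmed fix: the token stanza above only sets indexes (the list of indexes the token may write to) and no index (the default index), so events arriving without an explicit index in the HEC request may end up with no index key. A minimal sketch of the stanza with a default index added:

[http://Cloudflare5xx]
description = Used to get cloudflare logs into splunk for server 5xx errors
disabled = 0
# default index for events that do not specify one
index = cloudflare
indexes = cloudflare
token = 7xxxxxxxx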
Been trying to work out the regex for the props and transforms for onbase logging and I keep hitting a brick wall. Here is a sample log of what I have been working on: <?xml version="1.0" encoding="utf-8"?> <diagnosticsLog type="error-profile" startDate="07/28/2022 01:10:20"> <!--Build 18.0.1.42--> <columns> <column friendlyName="time" name="time" /> <column friendlyName="Result" name="Result" /> <column friendlyName="Module" name="Module" /> <column friendlyName="Class" name="Class" /> <column friendlyName="SourceFile" name="SourceFile" /> <column friendlyName="Method" name="Method" /> <column friendlyName="SourceLine" name="SourceLine" /> <column friendlyName="Severity" name="Severity" /> <column friendlyName="MachineName" name="MachineName" /> <column friendlyName="IpAddress" name="IpAddress" /> <column friendlyName="ErrorId" name="ErrorId" /> <column friendlyName="ProcessID" name="ProcessID" /> <column friendlyName="ThreadID" name="ThreadID" /> <column friendlyName="TimeSpan" name="TimeSpan" /> <column friendlyName="User" name="User" /> <column friendlyName="HTTPSessionID" name="HTTPSessionID" /> <column friendlyName="HTTPForward" name="HTTPForward" /> <column friendlyName="SessionID" name="SessionID" /> <column friendlyName="SessionGUID" name="SessionGUID" /> <column friendlyName="Datasource" name="Datasource" /> <column friendlyName="Sequence" name="Sequence" /> <column friendlyName="LocalSequence" name="LocalSequence" /> <column friendlyName="Message" name="Message" /> <column friendlyName="AppPoolName" name="AppPoolName" /> </columns> <rows> <row> <col name="time">07/28/2022 01:10:20</col> <col name="TimeSpan">N/A</col> <col name="ThreadID">0x0000000A</col> <col name="User"></col> <col name="HTTPSessionID"></col> <col name="HTTPForward"></col> <col name="SessionGUID"></col> <col name="SessionID">0</col> <col name="Datasource">OnBaseQA</col> <col name="AppPoolName"></col> <col name="IpAddress"></col> <col name="MachineName"></col> <col name="Result">0x00000000</col> <col name="Message">FileLoadException: File [D:\Program Files\Hyland\Services\Distribution\System.Runtime.CompilerServices.Unsafe.dll] Message [Could not load file or assembly 'System.Runtime.CompilerServices.Unsafe, Version=4.0.3.0, Culture=neutral, PublicKeyToken=abcd1234' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x901234)]</col> <col name="Module">Hyland.Core</col> <col name="Class">AssemblyRegistration</col> <col name="Method">ReadAssemblyXml</col> <col name="SourceFile"></col> <col name="SourceLine">0</col> <col name="Severity">Error</col> <col name="ErrorId"></col> </row> I'm trying to cut out or blacklist the first portion with the column naming and only grabbing the col name in between the "" with the corresponding data. I'm very new to manipulating data like this and am just starting to understand regex. 
These are my attempts on props and transforms:   Transforms: [log_time] DEST_KEY = MetaData:Sourcetype REGEX = <col name="time">(.*)<\/col> FORMAT = time::"$1" [time_span] DEST_KEY = MetaData:Sourcetype REGEX = <col name="TimeSpan">(.*)<\/col> FORMAT = time_span::"$1" [thread_id] DEST_KEY = MetaData:Sourcetype REGEX = <col name="ThreadID">(.*)<\/col> FORMAT = thread_id::"$1" [user] DEST_KEY = MetaData:Sourcetype REGEX = <col name="User">(.*)<\/col> FORMAT = user::"$1" [http_session_id] DEST_KEY = MetaData:Sourcetype REGEX = <col name="HTTPSessionID">(.*)<\/col> FORMAT = http_session_id::"$1" [http_forward] DEST_KEY = MetaData:Sourcetype REGEX = <col name="HTTPForward">(.*)<\/col> FORMAT = http_forward::"$1" [session_guid] DEST_KEY = MetaData:Sourcetype REGEX = <col name="SessionGUID">(.*)<\/col> FORMAT = session_guid::"$1" [session_id] DEST_KEY = MetaData:Sourcetype REGEX = <col name="SessionID">(.*)<\/col> FORMAT = session_id::"$1" [datasource] DEST_KEY = MetaData:Sourcetype REGEX = <col name="Datasource">(.*)<\/col> FORMAT = datasource::"$1" [app_pool_name] DEST_KEY = MetaData:Sourcetype REGEX = <col name="AppPoolName">(.*)<\/col> FORMAT = app_pool_name::"$1" [ip_address] DEST_KEY = MetaData:Sourcetype REGEX = <col name="IpAddress">(.*)<\/col> FORMAT = ip_address::"$1" [machine_name] DEST_KEY = MetaData:Sourcetype REGEX = <col name="MachineName">(.*)<\/col> FORMAT = machine_name::"$1" [result] DEST_KEY = MetaData:Sourcetype REGEX = <col name="Result">(.*)<\/col> FORMAT = result::"$1" [message] DEST_KEY = MetaData:Sourcetype REGEX = <col name="Message">(.*)<\/col> FORMAT = message::"$1" [module] DEST_KEY = MetaData:Sourcetype REGEX = <col name="Module">(.*)<\/col> FORMAT = module::"$1" [class] DEST_KEY = MetaData:Sourcetype REGEX = <col name="Class">(.*)<\/col> FORMAT = class::"$1" [method] DEST_KEY = MetaData:Sourcetype REGEX = <col name="Method">(.*)<\/col> FORMAT = method::"$1" [source_file] DEST_KEY = MetaData:Sourcetype REGEX = <col name="SourceFile">(.*)<\/col> FORMAT = source_file::"$1" [source_line] DEST_KEY = MetaData:Sourcetype REGEX = <col name="SourceLine">(.*)<\/col> FORMAT = source_line::"$1" [severity] DEST_KEY = MetaData:Sourcetype REGEX = <col name="Severity">(.*)<\/col> FORMAT = severity::"$1" [error_id] DEST_KEY = MetaData:Sourcetype REGEX = <col name="ErrorId">(.*)<\/col> FORMAT = error_id::"$1"   Props: [error_profile] SHOULD_LINEMERGE=true LINE_BREAKER=([\r\n]+) NO_BINARY_CHECK=true TRANSFORMS-set = log_time, time_span, thread_id, user, http_session_id, http_forward, session_guid, session_id, datasource, app_pool_name, ip_address, machine_name, result, message, module, class, method, source_file, source_line, severity, error_id
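For reference, a hedged observation plus sketch: DEST_KEY = MetaData:Sourcetype in transforms.conf rewrites the event's sourcetype rather than creating a field, so the stanzas above probably do not do what is intended. One search-time alternative sometimes used for this <col name="X">value</col> pattern extracts the field name and value in a single transform (a sketch only, not tested against this data; if a field name repeats within one event, MV_ADD = true may also be needed):

transforms.conf:
[onbase_col_kv]
REGEX = <col name="([^"]+)">([^<]*)</col>
FORMAT = $1::$2

props.conf:
[error_profile]
REPORT-onbase_cols = onbase_col_kv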
In Dashboard Studio, is it possible to have a page that uses a submit button (so the panels don't auto-query on every filter change), but also have the dropdown filters dynamically cascade, so that their options reflect only what is applicable in relation to the other filters, without requiring Submit to be pressed? Accomplishing this is pretty simple without the submit button, but the team requires a submit button on dashboards to save on processing.
Hi, I am trying to build a correlation that matches traffic to threat intel to figure out whether it has been blocked or not. It looks like this: Threat intel -> provides only the information that a given IP is malicious and recommends blocking. That's it. Traffic logs -> provide info on traffic that actually happened, both incoming and outgoing. Let's say that threat intel tells me that IP 1.2.3.4 is malicious. I do the following search:  `index=all_traffic sourcetype=traffic "1.2.3.4"` It shows me the IP about half the time in src and half the time in dest (the other field is my own IP), as well as whether this traffic has been dropped or not in a field called action. I need to automate this process to show me traffic containing suspicious IPs. I tried joining this way: `[...getting data from threat intel...] | table suspicious_ip | join suspicious_ip [ search index=all_traffic sourcetype=traffic  | rename src as suspicious_traffic ]` This works only sometimes and for about half of the cases, as it only looks at src and not dest. And it gets truncated because of the subsearch limit. I also tried to connect them via parentheses like this: `( from datamodel:"Threat_Intel"."Threat_Activity" ) AND (index=all_traffic sourcetype=traffic ) | stats values(*) as * by src, dest` But this didn't work because I cannot use from datamodel that way. My ideal output would be: suspicious_ip, src, dest, action. Any ideas?
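For reference, a sketch of a join-free variant that matches the suspicious IP whether it appears in src or dest, using the subsearch rename-to-query trick (values returned in a field named query are substituted as raw search terms rather than field=value pairs). The field name threat_match_value is an assumption about the Threat Intelligence data model, and the usual subsearch result limits still apply:

index=all_traffic sourcetype=traffic
    [| from datamodel:"Threat_Intel"."Threat_Activity"
     | dedup threat_match_value
     | fields threat_match_value
     | rename threat_match_value as query]
| table _time, src, dest, action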
Hello, I am pulling data from a MS SQL Server database via the DB Connect app. I have a UTC timestamp field in the returned dataset, which I map to Splunk's TIMESTAMP column so that it becomes the _time field. Splunk version is 8.2, DB Connect is 3.6.0. The problem: Splunk's _time field shows the wrong hour; the offset matches the difference between my local time and UTC. Question: how do I tell Splunk (or DB Connect) that the incoming timestamp field is a UTC one? Best regards, Altin
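For reference, a sketch of the props.conf approach sometimes used for this, telling Splunk to interpret the parsed timestamp as UTC where the data is parsed (the sourcetype name is a placeholder, and DB Connect also exposes its own timezone settings on the connection/input, so treat this as an assumption to verify):

props.conf on the DB Connect / parsing host:
[your_dbx_sourcetype]
TZ = UTC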
I have a splunk container running on docker, and was hoping to translate the splunk index data into json using a cli search and saving the output as a local file. How to do this? Thanks in advance!
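For reference, a sketch of one way this is commonly done from inside the container via the REST search export endpoint, which supports JSON output (container name, credentials, index, and time range below are placeholders):

# open a shell in the Splunk container (container name assumed)
docker exec -it splunk bash

# export search results as JSON to a local file via the management port
curl -k -u admin:yourpassword https://localhost:8089/services/search/jobs/export \
     -d search="search index=main earliest=-24h" \
     -d output_mode=json > /tmp/export.json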
I've deployed the props below to extract the event timestamp in Splunk. There are WARN messages in splunkd.log as follows: DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (12) characters of event. Defaulting to timestamp of previous event. Please refer to the sample event below:   Hounsaya     add_user      4               Thu Sep 15 10:09 - 26:39 (60+00:47)   Can you please help and let me know if I need to make any changes?
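For reference, a sketch of the props.conf settings that usually govern this warning, with values guessed from the sample event above (the sourcetype name and the TIME_PREFIX regex are assumptions, since the original props were not included):

[your_sourcetype]
# the timestamp (Thu Sep 15 10:09) sits well past the first 12 characters, so raise the lookahead
MAX_TIMESTAMP_LOOKAHEAD = 60
# skip the user, action, and count columns before the timestamp
TIME_PREFIX = ^\S+\s+\S+\s+\d+\s+
TIME_FORMAT = %a %b %d %H:%M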
Hi, I have data like A-001, A-002, A-003, ... I would like to know how to extract the numbers from these strings (001, 002, 003, ...) so that I can generate an alert: every third batch (001, 004, 007, ...) should be checked. Can someone help me with that? Thanks! Regards, Tong
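For reference, a sketch of the extraction plus the every-third-batch check (the source field name batch_id is a placeholder for wherever the A-001 values actually live):

... | rex field=batch_id "A-(?<batch_num>\d+)"
| eval batch_num = tonumber(batch_num)
| where batch_num % 3 == 1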
Hello everyone, I need to extract a field named product (the value after "Product:") and a field named status (the text after "--") from the below Message field values: Message="Product: Microsoft SQL Server 2019 LocalDB -- Installation completed successfully." Message="Product: Microsoft OLE DB Driver for SQL Server -- Installation completed successfully." Thank you in advance.
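For reference, a sketch of a rex that splits those Message values on the " -- " separator (assuming Message is already an extracted field):

... | rex field=Message "^Product:\s+(?<product>.+?)\s+--\s+(?<status>.+)$"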