All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunkers!! As shown in the screenshot below, I want to capitalise the first letter of every field column. I have tried the workarounds shown above (commented out). Please suggest how I can capitalise the first letter of every field name.
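A minimal sketch of one possible approach, assuming the results form a table keyed by _time (the field names here are only illustrative): unpivot the table, rewrite the field names, and pivot back.

... your base search producing a table keyed by _time ...
| untable _time field_name value
| eval field_name = upper(substr(field_name, 1, 1)) . substr(field_name, 2)
| xyseries _time field_name value

If the table is keyed by something other than _time, swap that field in for _time in both untable and xyseries.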
I am trying to send my Cloudflare HTTP logs to my externally exposed Splunk heavy forwarder (on-prem). I have installed the Cloudflare App on the heavy forwarder and the search head: https://splunkbase.splunk.com/app/4501/#/details

I know the data is making it to my heavy forwarder that has the application installed. However, the data isn't being correctly ingested. I am finding this type of log in the _internal index on my forwarder, and it appears once for each event that Cloudflare has sent to my forwarder. I have rebooted the forwarder since adding the application:

09-15-2022 10:16:22.804 -0400 WARN TcpOutputProc [5288 indexerPipe] - Pipeline data does not have indexKey. [_hecTeleVersionKey] = default\n[_hecTeleAppKey] = default\n[_raw] = \n[_meta] = punct::\n[MetaData:Source] = source::http:Cloudflare5xx\n[MetaData:Host] = host::readactedhost.com\n[MetaData:Sourcetype] = sourcetype::cloudflare:json\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_time] = 1663251382\n[_conf] = source::http:Cloudflare5xx|host::readactedhost.com|cloudflare:json|

My HEC token is configured as:

[http://Cloudflare5xx]
description = Used to get cloudflare logs into splunk for server 5xx errors
disabled = 0
indexes = cloudflare
token = 7xxxxxxxx

I am stumped as to what "Pipeline data does not have indexKey" means and cannot find a next step. If the logs are being sent and are making it to the forwarder, are there more steps beyond having the application there to interpret the information sent to the services/collector/raw endpoint? I have never ingested on the /raw endpoint before, so I wonder if something is missing.
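Not a definitive diagnosis, but one thing worth checking: the WARN shows an empty [_raw], and the token stanza restricts the allowed indexes (indexes =) without setting a default index (index =). A hedged inputs.conf sketch of a token stanza that also names a default index; the stanza, index, and sourcetype names are taken from the post, and the index = line is the assumption being suggested:

[http://Cloudflare5xx]
disabled = 0
token = 7xxxxxxxx
# indexes the token is allowed to write to
indexes = cloudflare
# default index used when the sender does not specify one (possibly missing here)
index = cloudflare
sourcetype = cloudflare:json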
Been trying to work out the regex for the props and transforms for OnBase logging, and I keep hitting a brick wall. Here is a sample log of what I have been working on:

<?xml version="1.0" encoding="utf-8"?>
<diagnosticsLog type="error-profile" startDate="07/28/2022 01:10:20">
  <!--Build 18.0.1.42-->
  <columns>
    <column friendlyName="time" name="time" />
    <column friendlyName="Result" name="Result" />
    <column friendlyName="Module" name="Module" />
    <column friendlyName="Class" name="Class" />
    <column friendlyName="SourceFile" name="SourceFile" />
    <column friendlyName="Method" name="Method" />
    <column friendlyName="SourceLine" name="SourceLine" />
    <column friendlyName="Severity" name="Severity" />
    <column friendlyName="MachineName" name="MachineName" />
    <column friendlyName="IpAddress" name="IpAddress" />
    <column friendlyName="ErrorId" name="ErrorId" />
    <column friendlyName="ProcessID" name="ProcessID" />
    <column friendlyName="ThreadID" name="ThreadID" />
    <column friendlyName="TimeSpan" name="TimeSpan" />
    <column friendlyName="User" name="User" />
    <column friendlyName="HTTPSessionID" name="HTTPSessionID" />
    <column friendlyName="HTTPForward" name="HTTPForward" />
    <column friendlyName="SessionID" name="SessionID" />
    <column friendlyName="SessionGUID" name="SessionGUID" />
    <column friendlyName="Datasource" name="Datasource" />
    <column friendlyName="Sequence" name="Sequence" />
    <column friendlyName="LocalSequence" name="LocalSequence" />
    <column friendlyName="Message" name="Message" />
    <column friendlyName="AppPoolName" name="AppPoolName" />
  </columns>
  <rows>
    <row>
      <col name="time">07/28/2022 01:10:20</col>
      <col name="TimeSpan">N/A</col>
      <col name="ThreadID">0x0000000A</col>
      <col name="User"></col>
      <col name="HTTPSessionID"></col>
      <col name="HTTPForward"></col>
      <col name="SessionGUID"></col>
      <col name="SessionID">0</col>
      <col name="Datasource">OnBaseQA</col>
      <col name="AppPoolName"></col>
      <col name="IpAddress"></col>
      <col name="MachineName"></col>
      <col name="Result">0x00000000</col>
      <col name="Message">FileLoadException: File [D:\Program Files\Hyland\Services\Distribution\System.Runtime.CompilerServices.Unsafe.dll] Message [Could not load file or assembly 'System.Runtime.CompilerServices.Unsafe, Version=4.0.3.0, Culture=neutral, PublicKeyToken=abcd1234' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x901234)]</col>
      <col name="Module">Hyland.Core</col>
      <col name="Class">AssemblyRegistration</col>
      <col name="Method">ReadAssemblyXml</col>
      <col name="SourceFile"></col>
      <col name="SourceLine">0</col>
      <col name="Severity">Error</col>
      <col name="ErrorId"></col>
    </row>

I'm trying to cut out or blacklist the first portion with the column naming and only grab the col name between the quotes with the corresponding data. I'm very new to manipulating data like this and am just starting to understand regex.
These are my attempts on props and transforms:

Transforms:

[log_time]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="time">(.*)<\/col>
FORMAT = time::"$1"

[time_span]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="TimeSpan">(.*)<\/col>
FORMAT = time_span::"$1"

[thread_id]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="ThreadID">(.*)<\/col>
FORMAT = thread_id::"$1"

[user]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="User">(.*)<\/col>
FORMAT = user::"$1"

[http_session_id]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="HTTPSessionID">(.*)<\/col>
FORMAT = http_session_id::"$1"

[http_forward]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="HTTPForward">(.*)<\/col>
FORMAT = http_forward::"$1"

[session_guid]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="SessionGUID">(.*)<\/col>
FORMAT = session_guid::"$1"

[session_id]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="SessionID">(.*)<\/col>
FORMAT = session_id::"$1"

[datasource]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="Datasource">(.*)<\/col>
FORMAT = datasource::"$1"

[app_pool_name]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="AppPoolName">(.*)<\/col>
FORMAT = app_pool_name::"$1"

[ip_address]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="IpAddress">(.*)<\/col>
FORMAT = ip_address::"$1"

[machine_name]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="MachineName">(.*)<\/col>
FORMAT = machine_name::"$1"

[result]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="Result">(.*)<\/col>
FORMAT = result::"$1"

[message]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="Message">(.*)<\/col>
FORMAT = message::"$1"

[module]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="Module">(.*)<\/col>
FORMAT = module::"$1"

[class]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="Class">(.*)<\/col>
FORMAT = class::"$1"

[method]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="Method">(.*)<\/col>
FORMAT = method::"$1"

[source_file]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="SourceFile">(.*)<\/col>
FORMAT = source_file::"$1"

[source_line]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="SourceLine">(.*)<\/col>
FORMAT = source_line::"$1"

[severity]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="Severity">(.*)<\/col>
FORMAT = severity::"$1"

[error_id]
DEST_KEY = MetaData:Sourcetype
REGEX = <col name="ErrorId">(.*)<\/col>
FORMAT = error_id::"$1"

Props:

[error_profile]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
TRANSFORMS-set = log_time, time_span, thread_id, user, http_session_id, http_forward, session_guid, session_id, datasource, app_pool_name, ip_address, machine_name, result, message, module, class, method, source_file, source_line, severity, error_id
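One observation: DEST_KEY = MetaData:Sourcetype rewrites the event's sourcetype at index time rather than creating fields. A hedged alternative sketch that extracts every <col name="...">value</col> pair as a search-time field with a single dynamic-name transform; the stanza names are illustrative, not the only way to do this:

transforms.conf
[onbase_col_kv]
REGEX = <col name="([^"]+)">([^<]*)</col>
FORMAT = $1::$2
MV_ADD = true

props.conf
[error_profile]
REPORT-onbase_cols = onbase_col_kv

With this, the <columns> header block simply produces no field matches, so it does not need to be blacklisted for field extraction; actually removing it from the indexed data would be a separate SEDCMD or nullQueue transform.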
In Dashboard Studio, is it possible to have a page that uses a submit button (so the panels don't auto-query on every filter change), but also have the dropdown filters dynamically cascade, so that their returned options reflect only what is applicable in relation to the other filters without requiring Submit to be pressed? Accomplishing this is pretty simple without the submit button, but the team requires a submit button on dashboards to save on processing.
Hi, I am trying to build a correlation that matches traffic to threat intel to figure out whether it has been blocked or not. It looks like this:

Threat intel -> provides only the information that a given IP is malicious and recommends blocking. That's it.
Traffic logs -> provide info on traffic that actually happened, both incoming and outgoing.

Let's say that threat intel tells me that IP 1.2.3.4 is malicious. I do the following search:

index=all_traffic sourcetype=traffic "1.2.3.4"

It shows me the IP roughly half the time in src and half the time in dest (the other one is my own IP), as well as whether this traffic has been dropped or not, in a field called action.

I need to automate this process to show me traffic containing a suspicious IP. I tried to join the following way:

[...getting data from threat intel...] | table suspicious_ip | join suspicious_ip [ search index=all_traffic sourcetype=traffic | rename src as suspicious_traffic ]

This works only sometimes and for half of the cases, as it only looks at src and not dest. And it gets truncated because of the subsearch limit. I also tried to connect them with parentheses like this:

( from datamodel:"Threat_Intel"."Threat_Activity" ) AND (index=all_traffic sourcetype=traffic ) | stats values(*) as * by src, dest

But this didn't work because I cannot use from datamodel this way.

My ideal output would be: suspicious_ip src dest action

Any ideas?
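A hedged sketch of one join-free pattern: have the intel subsearch build an explicit (src=... OR dest=...) filter string and return it to the outer traffic search. The first subsearch line keeps the poster's own datamodel reference as a placeholder for "your threat intel search", and suspicious_ip is an assumed field name; the usual subsearch result limits still apply, so a lookup-based match scales better for large intel lists.

index=all_traffic sourcetype=traffic
    [ | from datamodel:"Threat_Intel"."Threat_Activity"
      | dedup suspicious_ip
      | eval search = "(src=\"" . suspicious_ip . "\" OR dest=\"" . suspicious_ip . "\")"
      | stats values(search) as search
      | eval search = "(" . mvjoin(search, " OR ") . ")"
      | fields search ]
| stats values(action) as action by src dest

Because the subsearch returns a field literally named search, its value is inserted verbatim as the filter, so the outer search matches the suspicious IP in either src or dest.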
Hello,

I am pulling data from a MS SQL Server database via the DB Connect app. I have a UTC timestamp field in the returned dataset, which I map to Splunk's TIMESTAMP column to have it used for the _time field. The Splunk version is 8.2, DB Connect is 3.6.0.

The problem: Splunk's _time field shows the wrong hour; the difference is my local time vs UTC.

Question: How do I tell Splunk (or DB Connect) that the incoming timestamp field is a UTC one?

best regards
Altin
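A minimal sketch of one way to handle this, under the assumption that the input is assigned a sourcetype such as mssql:mydata (name made up here): set the timezone on that sourcetype in props.conf on the host doing the parsing (the DB Connect / heavy forwarder node), so the parsed timestamp is interpreted as UTC.

# props.conf -- sketch; replace the stanza name with the sourcetype your DB Connect input actually uses
[mssql:mydata]
TZ = UTC

If your DB Connect version also exposes a timezone setting on the connection itself, that is worth checking as well.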
I have a Splunk container running in Docker, and I was hoping to export the data in a Splunk index as JSON using a CLI search and save the output to a local file. How can I do this? Thanks in advance!
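A hedged sketch using the REST export endpoint from inside the container; the container name (splunk), credentials, index, time range, and output path are all assumptions to adapt:

docker exec splunk bash -c 'curl -sk -u admin:changeme \
    https://localhost:8089/services/search/jobs/export \
    -d search="search index=main earliest=-24h" \
    -d output_mode=json' > /tmp/index_export.json

The export endpoint streams results, so this also works for large result sets; the redirection writes the JSON to a file on the Docker host.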
I've deployed the props below to extract the time in Splunk. There are WARN messages in splunkd.log as follows:

DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (12) characters of event. Defaulting to timestamp of previous event.

Please refer to the log below:

Hounsaya     add_user      4               Thu Sep 15 10:09 - 26:39 (60+00:47)

Can you please help and let me know if I need to make any changes?
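Without seeing the deployed props, here is a hedged props.conf sketch for a line like the sample, where the timestamp ("Thu Sep 15 10:09") starts well past the first 12 characters; the sourcetype name is an assumption:

# props.conf -- sketch
[linux_user_audit]
# skip the leading user / action / count columns before looking for the timestamp
TIME_PREFIX = ^\S+\s+\S+\s+\S+\s+
TIME_FORMAT = %a %b %d %H:%M
MAX_TIMESTAMP_LOOKAHEAD = 20

Note the sample has no year, so Splunk will assume the current year when parsing.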
Hi, I have data like A-001, A-002, A-003, ... I would like to know how to extract the numbers from these strings (001, 002, 003, ...) so that I can generate an alert: every third batch (001, 004, 007) should be checked. Can someone help me with that? Thanks!

Regards,
Tong
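A minimal sketch, assuming the value lives in a field called batch (an assumed field name): pull out the digits with rex and keep every third batch with a modulo test.

... | rex field=batch "A-(?<batch_num>\d+)"
| eval batch_num = tonumber(batch_num)
| where (batch_num - 1) % 3 = 0

With this condition, 001, 004, 007, ... pass and the rest are filtered out.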
Hello everyone,

Please, I need to extract a field named product (with its value in bold) from the Message field values below, and a field named status (with its value in italics):

Message="Product: Microsoft SQL Server 2019 LocalDB -- Installation completed successfully."
Message="Product: Microsoft OLE DB Driver for SQL Server -- Installation completed successfully."

Thank you in advance.
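A minimal rex sketch, assuming the Message field is already extracted, that the product is everything between "Product:" and the "--" separator, and that the status is everything after it:

... | rex field=Message "Product:\s+(?<product>.+?)\s+--\s+(?<status>.+?)\.?$"
| table Message product status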
Reference: https://zpettry.com/cybersecurity/splunk-queries-data-exfiltration/

| bucket _time span=1d
| stats sum(bytes*) as bytes* by user _time src_ip
| eventstats max(_time) as maxtime avg(bytes_out) as avg_bytes_out stdev(bytes_out) as stdev_bytes_out
| eventstats count as num_data_samples avg(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_avg_bytes_out stdev(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_stdev_bytes_out by src_ip
| where num_data_samples >=4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")
| eval num_standard_deviations_away_from_org_average = round(abs(bytes_out - avg_bytes_out) / stdev_bytes_out,2), num_standard_deviations_away_from_per_source_average = round(abs(bytes_out - per_source_avg_bytes_out) / per_source_stdev_bytes_out,2)
| fields - maxtime per_source* avg* stdev*

If you can decode this and let me know what is going on in it, especially what calculation it is doing with time, that would be helpful.
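As a small illustration of the time piece only: relative_time(maxtime, "@h") snaps a timestamp down to the top of its hour, so the query separates samples before the most recent hour (used as the baseline) from samples at or after it (the ones being tested). A runnable sketch with a made-up timestamp:

| makeresults
| eval maxtime = strptime("2022-09-15 10:37:00", "%Y-%m-%d %H:%M:%S")
| eval start_of_hour = relative_time(maxtime, "@h")
| eval readable = strftime(start_of_hour, "%Y-%m-%d %H:%M:%S")
| table maxtime start_of_hour readable

Here readable comes out as 2022-09-15 10:00:00.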
So I've got a dashboard about failed login attempts and I want to have a search function as well. The search function will have four separate search boxes: "Username", "User ID", "Email Address" and "IP Address". I want it so that when you search for a username in the username search box, it will retrieve the username in the username tile, the user ID in the user ID tile, the email address in the email address tile and the IP address in the IP address tile. Likewise, if you were to search for any of those other things, it would return all of those results. Just as a note, the username can be either an email address or a string of numbers and letters.

My code looks like this so far:

IP Address Tile:

index=keycloak ipAddress=$ipaddress$
| table ipAddress

Username Tile:

index=keycloak username=$username$ AND username!="*@*"
| dedup userId
| fields username
| table username

Any help with this would be great.
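A hedged sketch of one way to let any single box drive all four tiles: give every input a default (and unset) value of *, then filter on all four tokens with implicit AND so that only the filled-in box actually narrows the search. The $userid$ and $email$ token names and the email field name are assumptions; the rest come from the post.

index=keycloak username="$username$" userId="$userid$" email="$email$" ipAddress="$ipaddress$"
| stats values(username) as username values(userId) as userId values(email) as email values(ipAddress) as ipAddress

Each tile can then display just its own column (or use this as a base search and post-process it), so searching by any one identifier populates all four tiles for the matching account.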
How do I integrate the Splunk DB Connect app with an Azure SQL database? I need to create inputs (queries) and ingest the results from the Azure database into Splunk. Can the Splunk DB Connect app be used to do this? If yes, exactly which drivers do I need for that? Again, thanks in advance. Tarun
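For what it's worth, Azure SQL speaks the standard SQL Server protocol, so the Microsoft JDBC driver for SQL Server is the usual choice. A hedged sketch of the JDBC URL shape only; the server and database names are placeholders and the exact options should be checked against your environment:

jdbc:sqlserver://<yourserver>.database.windows.net:1433;database=<yourdb>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;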
We have implemented a Splunk app (a modular input using the Splunk Python SDK) which runs all the time. At startup, the app receives a token ("session key") which is used during execution to access services such as the Splunk KV Store and the passwords store. Since this application runs all the time, there is a question regarding expiration of the token (the session key again). So, does this token expire? If it does expire, what method can the application use to refresh it via the Splunk Python SDK?
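Not an authoritative answer, but as a sketch of the refresh mechanics in splunklib: a Service built from credentials (rather than a fixed session key) can call login() again to obtain a fresh session token. The host, port, and credentials below are placeholders; a modular input that only ever receives the session key on stdin would need some other way to re-authenticate.

import splunklib.client as client

# Sketch: connect with credentials so the session can be refreshed later.
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

def call_with_relogin(fn):
    """Run an SDK call; if it fails (e.g. expired session), re-login once and retry."""
    try:
        return fn()
    except Exception:
        service.login()   # request a new session token from splunkd
        return fn()

# Example: list KV store collections in the current app context.
collections = call_with_relogin(lambda: list(service.kvstore))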
Hi,

I have an index where all the data is in JSON format, and I am struggling to use regex or spath to extract the fields. I have used this, but I got it from Google and I do not understand why "msi" is being used:

index=ABC sourcetype=DEF
| rex field=_raw "(?msi)(?<appNegated>\{.+\}$)"
| spath input=appNegated

How can I extract each of these JSON-embedded fields as Splunk fields to then use in a dashboard?

Many thanks!
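For context, (?msi) is just a set of inline regex modifiers (m = multiline, s = dot matches newlines, i = case-insensitive); the rex captures the trailing {...} block into appNegated so spath can parse it. If the whole event is valid JSON, a hedged minimal sketch is simply spath on its own (the field paths below are made-up examples):

index=ABC sourcetype=DEF
| spath
| table data.field1 data.nested.field2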
I am using the Java agent to push logs to Splunk Observability but am getting a 404 despite valid credentials: https://github.com/signalfx/splunk-otel-java

I have provided the env variables below, but the agent still gives a 404 error:

ENV SPLUNK_ACCESS_TOKEN=<access-token>
ENV SPLUNK_REALM=jp0

Errors:

[OkHttp https://ingest.jp0.signalfx.com/...] WARN io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export logs. Server responded with HTTP status code 404. Error message: Not Found
Hi, we use the threat intelligence framework within Enterprise Security and use the local IP intel CSV file (local_ip_intel.csv) to upload threats that we find from our threat hunting. Currently we do not remove old entries from this list; we just append new ones to the end of it.

I am wondering: once the file has been uploaded, should we remove the rows of data from the CSV file to start the clock on the expiration of the threat from ES, or does it not matter? I do not want to sinkhole the file, as it is a default file provided as part of the app, so I am not sure how it would react.

Thanks
Hi Splunkers,

I'm trying to use ITSI to monitor my Windows infrastructure. I used the data collection script (generated by ITSI) to automatically install and configure the Splunk forwarder on a test Windows 2019 server. I see the data stream coming from the test server to the indexer. The entity is correctly created, but both the sample service and the base searches created with "Data Integrations -> Monitoring MS Windows" don't work.

From what I understand so far, there is a mismatch between the sourcetype assigned by the forwarder and the one used in the base searches:

sourcetype=perfmonMK:LogicalDisk (in the base search) vs sourcetype=PerfmonMetrics:LogicalDisk (in the indexed data)

Is there someone else with the same issue? Could it be a bug? Any tips to fix it?
Hi everyone,

I am building a dashboard in which I'm getting data from PostgreSQL using dbxquery. Currently I'm filtering the date range by putting it in the WHERE clause of the SQL query inside the dbxquery. However, I want to let users choose the date range themselves, for example by creating 2 input boxes in the dashboard and letting them enter the start and end times they want; the data would then be updated accordingly. The datetime format that I'm getting from PostgreSQL is yyyy-mm-dd HH:MM:SS.

I created an input with a token, but I don't know how to use that token inside my query. How can I do it, please? Or if you have any suggestions for this case, feel free to tell me.

Thanks,
Julia
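A hedged sketch of the token substitution, assuming two text inputs with tokens time_start and time_end, and a connection, table, and column that are placeholders; the dashboard substitutes the tokens into the query string before dbxquery runs:

| dbxquery connection="my_postgres" query="SELECT * FROM my_table WHERE event_time BETWEEN '$time_start$' AND '$time_end$'"

Since the format entered by the user must match what PostgreSQL expects, keeping the inputs in yyyy-mm-dd HH:MM:SS form (or casting in the SQL) is the main thing to watch.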
Hello everyone, I'd appreciate it if anyone could step in to help me with something that is unclear to me.

For use cases (anything in Enterprise Security > Content), I have found out that for NEW correlation searches I create, I can use macros or eventtypes/tags in the correlation search to address all existing source types AND new source types that might be onboarded, so that all my use cases (correlation searches) stay up to date.

Could someone explain how this works with the content that comes by default with Enterprise Security? How do those out-of-the-box correlation searches (saved searches and all the others) know how to look into data from my source types if the source types aren't specified?

Thank you in advance to anyone who will take the time to make this clear to me.
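For illustration only: most shipped ES correlation searches query CIM data models, which are populated by the add-ons that tag and normalize your sourcetypes, rather than naming sourcetypes directly. A hedged sketch of that pattern (the threshold and fields are arbitrary examples):

| tstats summariesonly=true count from datamodel=Authentication.Authentication where Authentication.action="failure" by Authentication.src, Authentication.user
| where count > 10

Any newly onboarded source type whose add-on maps it into the Authentication data model is picked up by a search like this automatically, with no sourcetype mentioned anywhere.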