All Topics

Reference: https://zpettry.com/cybersecurity/splunk-queries-data-exfiltration/

Could someone decode this search and let me know what is going on in it? In particular, what calculation is it doing with time?

| bucket _time span=1d
| stats sum(bytes*) as bytes* by user _time src_ip
| eventstats max(_time) as maxtime avg(bytes_out) as avg_bytes_out stdev(bytes_out) as stdev_bytes_out
| eventstats count as num_data_samples avg(eval(if(_time < relative_time(maxtime, "@h"), bytes_out, null))) as per_source_avg_bytes_out stdev(eval(if(_time < relative_time(maxtime, "@h"), bytes_out, null))) as per_source_stdev_bytes_out by src_ip
| where num_data_samples >= 4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")
| eval num_standard_deviations_away_from_org_average = round(abs(bytes_out - avg_bytes_out) / stdev_bytes_out, 2), num_standard_deviations_away_from_per_source_average = round(abs(bytes_out - per_source_avg_bytes_out) / per_source_stdev_bytes_out, 2)
| fields - maxtime per_source* avg* stdev*
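For anyone decoding this, the time logic hinges on relative_time(maxtime, "@h"), which snaps the most recent event time in the result set down to the top of its hour. Rows strictly before that boundary build the per-source baseline averages and standard deviations (the eval(if(...)) inside the eventstats), and only rows at or after the boundary are scored against the baseline as potential outliers. A minimal sketch of the snapping behavior (field names here are illustrative, not from the original search):

| makeresults
| eval maxtime = now(), hour_floor = relative_time(maxtime, "@h")
| eval maxtime_readable = strftime(maxtime, "%F %T"), hour_floor_readable = strftime(hour_floor, "%F %T")

In short: everything before hour_floor is treated as history, and everything from hour_floor onward is the window being tested for byte counts more than three standard deviations above both the org-wide and the per-source average.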
So I've got a dashboard about failed login attempts and I want to add a search function as well. It will have four separate search boxes: "Username", "User ID", "Email Address" and "IP Address". When you search for a username in the username search box, it should retrieve the Username in the username tile, the User ID in the user ID tile, the Email Address in the email address tile and the IP Address in the IP address tile. Likewise, searching on any of the other fields should return all of those results. Just as a note, the username can be either an email address or a string of numbers and letters. My code looks like this so far:

IP Address Tile:

index=keycloak ipAddress=$ipaddress$
| table ipAddress

Username Tile:

index=keycloak username=$username$ AND username!="*@*"
| dedup userId
| fields username
| table username

Any help with this would be great.
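One way to sketch this (hedged: the field names userId and email, and the tokens $userid$ and $email$, are assumptions, since the post only shows $username$ and $ipaddress$; each token is also assumed to default to * when its box is empty) is to drive every tile from the same token-filtered search, so that filling in any one box narrows all four tiles together:

index=keycloak username=$username$ userId=$userid$ email=$email$ ipAddress=$ipaddress$
| stats values(username) as username values(email) as email values(ipAddress) as ipAddress by userId

Each tile would then keep only its own column (for example appending | fields username), and in the dashboard the four text inputs would each set one of those tokens with a default of *, so an empty box matches everything.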
How do I integrate the Splunk DB Connect app with an Azure SQL database? I need to create inputs (queries) and ingest the results from the Azure database into Splunk. Can the Splunk DB Connect app do this? If yes, exactly which drivers do I need for that? Again, thanks in advance. Tarun
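As far as I know, Azure SQL speaks the standard SQL Server protocol, so DB Connect's Microsoft SQL Server JDBC driver (mssql-jdbc) is the usual choice; worth confirming against the DB Connect documentation for your version. Once a connection is defined, an ad-hoc dbxquery is a quick way to verify it before building an input (the connection name and table below are placeholders):

| dbxquery connection="azure_sql" query="SELECT TOP 10 * FROM dbo.my_table"

When that returns rows, the same connection can back a scheduled DB input (rising-column or batch) that indexes the query results.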
We have implemented a Splunk app (a modular input using the Splunk Python SDK) which runs all the time. At startup the app receives a token ("session key") which is used during the app's execution to access services such as the Splunk KV Store and the passwords store. Since this application runs all the time, there is a question regarding expiration of the token (the session key, again). So, does this token expire? If the token does expire, what method can the application use to refresh it via the Splunk Python SDK?
Hi, I have an index where all the data is in JSON format and I am struggling to use regex or spath to extract the fields. I have used this, but I got it from Google and I do not understand why "msi" is being used:

index=ABC sourcetype=DEF
| rex field=_raw "(?msi)(?<appNegated>\{.+\}$)"
| spath input=appNegated

How can I extract each of these JSON-embedded fields as Splunk fields to then be used in a dashboard? Many thanks!
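For context, (?msi) is a set of inline PCRE flags: m makes ^ and $ match at line boundaries, s lets . match newlines (so .+ can span a multi-line JSON blob), and i makes matching case-insensitive. The rex captures everything from the first { to a } at the end of the event into appNegated, and spath then parses that capture as JSON. A self-contained sketch with an invented sample event (the payload and field names are purely illustrative):

| makeresults
| eval _raw = "2022-09-14 12:00:00 INFO {\"user\": \"alice\", \"action\": \"login\", \"status\": \"failed\"}"
| rex field=_raw "(?msi)(?<appNegated>\{.+\}$)"
| spath input=appNegated

This produces user, action and status as ordinary fields. If the whole event is JSON (no leading text), a plain | spath on _raw, or KV_MODE = json in props.conf, may remove the need for the rex entirely.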
I am using the Java agent to push logs to Splunk Observability but am getting a 404 with valid credentials. https://github.com/signalfx/splunk-otel-java I have provided the env variables below, but the agent still gives a 404 error.

ENV SPLUNK_ACCESS_TOKEN=<access-token>
ENV SPLUNK_REALM=jp0

Errors:

[OkHttp https://ingest.jp0.signalfx.com/...] WARN io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export logs. Server responded with HTTP status code 404. Error message: Not Found
Hi, we use the threat intelligence app within Enterprise Security and use the local IP intel CSV file (local_ip_intel.csv) to upload threats that we find from our threat hunting. Currently we do not remove old entries from this list; we just append new ones to the end of it. I am wondering: once the file has been uploaded, should we remove the rows of data from the CSV file to start the clock on the expiration of the threat from ES, or does it not matter? I do not want to sinkhole the file, as it is a default file provided as part of the app, so I am not sure how it would react. Thanks
Hi Splunkers, I'm trying to use ITSI to monitor my Windows infrastructure. I used the data collection script (generated by ITSI) to automatically install and configure the Splunk forwarder on a test Windows 2019 server. I see the data stream coming from the test server to the indexer. The entity is correctly created, but both the sample service and the base searches created with "Data Integrations -> Monitoring MS Windows" don't work. From what I understand so far, there is a mismatch between the sourcetype assigned by the forwarder and the one used in the base searches:

sourcetype=perfmonMK:LogicalDisk (in base search) vs sourcetype=PerfmonMetrics:LogicalDisk (in indexed data)

Is anyone else seeing the same issue? Could it be a bug? Any tips to fix it?
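To confirm the mismatch before changing anything, it may help to list which sourcetypes actually carry metric data. A sketch, assuming the Perfmon data landed in a metrics index (the wildcarded index is a placeholder; scope it to your metrics index if you know it):

| mcatalog values(metric_name) WHERE index=* BY index, sourcetype

If the indexed side really is PerfmonMetrics:LogicalDisk, aligning the ITSI base search (or the forwarder's configured sourcetype) to the indexed value is likely simpler than rewriting sourcetypes at ingest time.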
Hi everyone, I am building a dashboard in which I'm getting data from PostgreSQL using dbxquery. Currently I'm filtering the date range by putting it in the WHERE clause of the SQL query inside the dbxquery. However, I want to let users choose the date range themselves, for example by creating 2 input boxes in the dashboard and letting them enter the start and end times they want, so the data updates accordingly. The datetime format I'm getting from PostgreSQL is yyyy-mm-dd HH:MM:SS. I created an input with a token, but I don't know how to use that token inside my query. How can I do it, please? Or if you have any suggestions for this case, feel free to tell me.

Thanks, Julia
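Dashboard tokens are substituted into the search string before it runs, so a token can sit inside the dbxquery SQL itself. A sketch, assuming two text inputs that set $start_time$ and $end_time$ in yyyy-mm-dd HH:MM:SS form (the connection, table and column names are placeholders):

| dbxquery connection="my_postgres" query="SELECT * FROM my_table WHERE created_at BETWEEN '$start_time$' AND '$end_time$'"

Since this is plain string substitution, it is worth constraining the inputs (e.g. time pickers or validated formats rather than free text) so users cannot produce malformed SQL.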
Hello everyone, I'd appreciate it if anyone could step in to help me with something that is unclear to me. For use cases (anything in Enterprise Security > Content), I have found that for NEW correlation searches I can use macros or eventtypes/tags in my correlation search to address all existing source types AND new source types that might be onboarded, so that all my use cases (correlation searches) stay up to date. Could someone explain how this works with the content that ships by default with Enterprise Security? How do the out-of-the-box correlation searches (saved searches and all the others) know how to look into data from my source types if the source types aren't specified? Thank you in advance to anyone who takes the time to make this clear to me.
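For context, most out-of-the-box ES correlation searches don't name sourcetypes at all: they run against the CIM data models, which are populated by the tags and eventtypes that each add-on applies to its own sourcetypes. A typical shape looks like this (the specific search below is illustrative, not quoted from ES content):

| tstats summariesonly=true count from datamodel=Authentication where Authentication.action="failure" by Authentication.src, Authentication.user

So as long as the add-on for a newly onboarded source type tags its events correctly (authentication, failure, and so on), the events flow into the data model and the stock correlation searches pick them up without any modification.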
Hi All, I have a search which parses key/value pairs out of a strangely-formatted XML field.

rex field=xml "<N>(?<field_name>(.*?))</N><V>(?<field_value>(.*?))<"
| eval {field_name}=field_value

When there is a single match, this works as expected: I have the field name and the field value available as fields in my results. What I don't know how to do is make this work for multiple matches. When I run:

rex field=xml max_match=0 "<N>(?<field_name>(.*?))</N><V>(?<field_value>(.*?))<"
| eval {field_name}=field_value

then both field_name and field_value are multi-value fields. I would like to make each key=value available in the results as I did above. Can anyone give me a pointer on how to accomplish this? Thanks.
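One common pattern is to zip the two multi-value fields together, expand the pairs, and apply the same {field} trick per pair. A sketch (hedged: it assumes "=" never appears inside a name or value, and the final stats assumes one source event per _time, so swap in whatever uniquely identifies your events):

rex field=xml max_match=0 "<N>(?<field_name>(.*?))</N><V>(?<field_value>(.*?))<"
| eval kv = mvzip(field_name, field_value, "=")
| mvexpand kv
| eval k = mvindex(split(kv, "="), 0), v = mvindex(split(kv, "="), 1)
| eval {k} = v
| stats values(*) as * by _time

mvexpand splits each event into one row per pair so the dynamic eval works, and the closing stats folds the rows back into a single result per event with all the extracted fields attached.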
I want to update all 3 alert levels (critical, major and minor) in the same column. Can anyone help with this?
Intermittent text file data collection is not working. Initially, the CSV file data is collected. After that, if you change only a few characters in the CSV, the changes are not collected. Which part should I check?

Setting:

[monitor://D:\Space\Config*File\Devicenet_Config.csv]
disabled = 0
host = HOST_NAME
index = FDC_MainUtility
sourcetype = FDCField
crcSalt = <SOURCE>
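One thing worth knowing: when a monitored file is edited in place rather than appended to, Splunk's tailing processor can decide it has already read the file and skip it. The TailReader/WatchedFile messages in splunkd.log usually say exactly why a file was skipped, so a diagnostic search on the forwarder's own internal logs is a reasonable first step (a sketch; the component names are the standard splunkd log components):

index=_internal source=*splunkd.log* (TailReader OR WatchedFile OR TailingProcessor) "Devicenet_Config.csv"
| table _time host component log_level _raw

If the log shows the file being treated as previously seen, re-creating the file (or having the producer rewrite it with a changed beginning) is one way to force a fresh read.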
This question gets pretty complicated, and I'm not sure if anyone has needed to ask about this before. I couldn't find anything, so here goes. We have a larger Splunk distributed deployment with Enterprise Security and clustered indexers. The CrowdStrike EDR agent has been running on all the systems in "Monitor" (no actions allowed) mode and seems to be doing fine, but now (due to reasons that are too complex to go into here) we need to configure the CrowdStrike policy to allow autonomous responses on those systems. Actions could include terminating connections, terminating running programs, quarantining files, and blocking ports. I am worried that the indexer cluster (or other critical Splunk features, like the deployment server) could be permanently impacted if one or more of those autonomous actions are taken. We can't *not* configure the agent to allow those actions, so at the very least I'd like to be aware of what could break, how, and how I can test for those things. Anyone have any ideas or suggestions?
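One concrete thing that can be watched during a phased rollout is queue health: if an EDR action severs forwarding or indexer replication, blocked queues tend to show up quickly in metrics.log. A hedged monitoring sketch over the deployment's internal logs:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name

Pairing a search like this with the Monitoring Console health checks, and enabling the CrowdStrike response policy on one indexer at a time rather than fleet-wide, would bound the blast radius while you learn which actions (if any) interfere with splunkd's connections and file access.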
Hi Team, please help me with this, as I am not able to log in to the AppDynamics controller; I get a "Login failed" error again and again. I was not able to log in the first time either. Note: a ticket similar to my issue was raised before as well: https://community.appdynamics.com/t5/Licensing-including-Trial/during-login-it-shows-login-failed/td-p/46172/page/2 Thanks & Regards, Sachin Agarwal 9897439324
Dear Community, do you have any opinions or first-hand experience with Splunk's new feature called Data Manager? What are its advantages and disadvantages compared to an add-on (i.e. the Splunk Add-on for AWS) for ingesting your data into Splunk? As I see it, Data Manager just generates CloudFormation templates and we have to stand up our own resources from them (paying AWS for those resources) and push data to Splunk (are we paying for SVC usage - ingestion - here?), while the add-on is fully managed by Splunk and pulls the data from AWS. The add-on of course uses Splunk resources (are we paying for SVC usage - ingestion - here?) to pull data, up to the AWS API limits (if it reaches them). So, I would be happy to hear your own, objective experiences on the topic. Is it worth starting to use Data Manager overall? (AWS costs + Splunk ingestion costs (?)) vs. (Splunk ingestion costs (?) + API limits). What else did you consider? (Of course I tried googling this topic, but I did not find any good, objective comparison or opinions about it.) Thank you very much for your help!
Hello Splunk ES experts, my splunkd is crashing frequently with the error below in the crash logs:

C++ exception: exception_addr=0x7ff2c2c3c620 typeinfo=0x556c38241c48, name=St9bad_alloc
Exception indicates memory allocation failure

This started after the ES app installation. I have checked with the free command and I have plenty of memory on the system (32 GB RAM and 16 CPUs):

              total       used      free      shared  buff/cache  available
Mem:       32750180    1764936  19979208       17744    11006036   31040488
Swap:       4169724          0   4169724

Below are my ulimit settings:

splunk soft nofile 65535
splunk hard nofile 65535
splunk soft nproc 65535
splunk hard nproc 65535

Any suggestions please?
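Worth noting: bad_alloc is thrown when a single allocation fails, which can happen from per-process limits or address-space pressure even when free shows plenty of system RAM. splunkd's own resource-usage introspection can show whether its memory footprint climbs before each crash; a sketch using the standard introspection fields:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart max(data.mem_used) by host

If memory ramps steadily up to each crash, the next step is usually identifying which ES searches (datamodel accelerations, correlation searches) coincide with the ramp.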
So I'm trying to get all events where val1+val2 also appear in another event in the table. In the example below, I would need rows 0 and 1 as output, because both val1 and val2 match. Rows 3 and 4 match on val1 but not on val2, and rows 1 and 2 match on val2 but not on val1, so those events should be excluded. (Also, the time column needs to stay, as I need to do some other operations with it.)

row#  time        val1  val2
0     YYYY-MM-DD  A     X
1     YYYY-MM-DD  A     X
2     YYYY-MM-DD  B     X
3     YYYY-MM-DD  C     Y
4     YYYY-MM-DD  C     X
5     YYYY-MM-DD  A     Z

To solve this I've been trying:

| foreach val1 [eval test=if(val1+val2=val1+val2, "same", "not")]

or

'<<FIELD>>' = '<<FIELD>>'

but I end up getting either "not" in all cases, or "same" in others even though both values are not actually the same.
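A comparison inside a single row can only ever see that row's own values, which is why the foreach test never finds the other events. Counting duplicate combinations across rows with eventstats may be closer to what's needed; a sketch:

| eventstats count as pair_count by val1, val2
| where pair_count > 1
| fields - pair_count

Every row keeps all its original fields (including time); only rows whose exact val1/val2 combination appears more than once survive, which matches rows 0 and 1 in the example.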
I am trying to configure the Splunk Add-on for Microsoft Azure (version 4.0.2 on a standalone heavy forwarder running Splunk 9.0.1, OS RHEL 7) and I'm seeing the error below in /opt/splunk/var/log/splunk/ta_ms_aad_MS_AAD_audit.log.

2022-09-14 11:41:41,871 ERROR pid=12784 tid=MainThread file=base_modinput.py:log_error:316 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 140, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_audit.py", line 168, in collect_events
    response = azutils.get_items_batch_session(helper=helper, url=url, session=session)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/utils.py", line 119, in get_items_batch_session
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/utils.py", line 115, in get_items_batch_session
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://graph.microsoft.com/None/auditLogs/directoryAudits?$orderby=activityDateTime&$filter=activityDateTime+gt+2021-10-01T14:26:12.017133Z+and+activityDateTime+le+2022-09-14T16:34:41.623739Z

On the Azure (Government) side we have the permissions below enabled:

AuditLog.Read.All
Device.Read.All
Directory.Read.All
Group.Read.All
GroupMember.ReadWrite.All
IdentityRiskEvent.Read.All
Policy.Read.All
Policy.Read.ConditionalAccess
Policy.ReadWrite.ConditionalAccess
SecurityEvents.Read.All
User.Read
User.Read.All

Also, we have a P2 license, so that should not be the issue. We have a Python script that is able to retrieve sign-ins from Azure using the same credentials we are using for the Splunk Add-on for Microsoft Azure. Another thing I noticed is that the URL in the error message seems wrong (the API version segment is None). It seems like it should be:

https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$orderby=activityDateTime&$filter=activityDateTime+gt+2021-10-01T14:26:12.017133Z+and+activityDateTime+le+2022-09-14T16:34:41.623739Z

A couple of other tidbits: the app works for our commercial tenant. Our government tenant is new and at this point doesn't have any subscriptions. Does anyone know if having more than zero subscriptions is a requirement for this app?
I currently have a lookup that contains two columns: hostnames and location. I can use the following search to look for "squirrel" on all hostnames in this lookup:

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]

What I would like to do is set up an alert where, for each hostname in MY_Hostname, Splunk will look for "squirrel". If the number of results found is equal to 0 for a host (meaning that the squirrel log was not created) in a 24-hour period, I would like an email sent out with that hostname in it. I know I can set it up with all hostnames from the lookup, but the issue I see is that if hostname_1 has "squirrel" and hostname_4 does not, the total will still be greater than 0. I effectively want to know if an application is not running, and which host it is not running on. The application will generate "squirrel" at least once in a 24-hour period. (If you don't like squirrels, you can insert your animal of choice here.)
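One common shape for a "which hosts went silent" alert is to count per host and then append every lookup host with a zero count, so hosts with no events still appear in the results. A sketch using the lookup and field names from the post, scheduled over the last 24 hours:

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]
| stats count by host
| append [| inputlookup mylookup.csv | rename MY_Hostname as host | eval count=0 | fields host count]
| stats sum(count) as events by host
| where events=0

Hosts that logged "squirrel" end up with events > 0 and drop out; hosts that appear only via the appended lookup rows end up at 0 and survive. Set the alert to trigger when the number of results is greater than zero, and each result row is one silent host for the email.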