All Topics



I have two sources that share a common field (user), and I am currently using transaction to join user_a with source_b_field. This query works fine:

index=index_a (sourcetype=source_a OR sourcetype=source_b)
| transaction startswith="string_start" endswith="string_end" maxspan=1s maxevents=2
| where (user_a = user_b)
| stats count by user_a, source_b_field

I figured it would be easy enough to use stats instead to improve execution efficiency, but I can't seem to get it quite right. The issue is that I need a left/inner join rather than a full join, as I am only looking for users from source_a. Here is the stats query, which essentially just returns data from source_b, since source_a is a subset of source_b:

index=index_a (sourcetype=source_a OR sourcetype=source_b)
| eval user_a=if(sourcetype=="source_b", user_b, user_a)
| stats count by user_a, source_b_field

Is there a way to join user_a with source_b_field via stats? I feel that I am missing something obvious.
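For reference, one stats-based way to emulate the inner join is to coalesce the two user fields, aggregate per user, and keep only the users seen in both sourcetypes. A sketch reusing the field names from the question (the final where clause is what enforces the "only users from source_a" restriction):

```
index=index_a (sourcetype=source_a OR sourcetype=source_b)
| eval user=coalesce(user_a, user_b)
| stats values(source_b_field) AS source_b_field dc(sourcetype) AS sourcetype_count count BY user
| where sourcetype_count=2
```

Unlike transaction, this has no maxspan=1s constraint, so it only matches the original query's behavior if time proximity between the two events does not matter.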
Hello, I have installed the File/Directory Information Input add-on on a Windows endpoint to test monitoring certain files. The Windows host has Python 2.x installed, and I am having Splunk monitor a "test.txt" file under C:\Program Files\SplunkUniversalForwarder, so in theory we shouldn't have any permission issues (please correct me if I am wrong). The only change to the add-on was creating a local directory with an inputs.conf file:

[file_meta_data://default]
file_path = C:\Program Files\SplunkUniversalForwarder
interval = 1m
recurse = 1
only_if_changed = 1
include_file_hash = 0
file_hash_limit = 500MB
file_filter = test
index = main

I am seeing no logs. What's also interesting is that I am not seeing the file_meta_data_modular_input.log file either. Where am I going wrong?
I need to ingest data from a Cisco ISE server, but I have had to deal with a protocol called "pxGrid" which, from what I have been told, allows bidirectional communication between Splunk and Cisco ISE, so that devices, IPs, etc. can be blocked from a Splunk dashboard, among other functions. I gathered this from the descriptions of the two available apps, and my first question is to confirm whether I am right or wrong:

Splunk Add-on for Cisco Identity Services: I understand this is the one that enables data ingestion via syslog.
Splunk for Cisco Identity Services (ISE): dashboards and reports.

I do not fully understand what this "pxGrid" protocol does. What I would like to know is:
1. Is pxGrid still supported by Splunk, or is it no longer supported?
2. Is it true that devices can be blocked from a dashboard and have that reflected in Cisco ISE as an automated process?
3. I am working in a centralized architecture where a single server handles syslog ingestion, indexing, and the search head role. What should I take into account when implementing this Cisco ISE - Splunk integration?
4. From these release notes I understand that this functionality is no longer available: https://docs.splunk.com/Documentation/AddOns/released/CiscoISE/Releasenotes

Thanks if someone can help me.
I have a log with the following entries, among others, and I am looking for a way to display the top 2 times for each action:

Calculated ABC. Action took 100 milliseconds
Calculated XYZ. Action took 122450 milliseconds
Calculated ABC. Action took 10 milliseconds
Calculated XYZ. Action took 67543 milliseconds
Calculated ABC. Action took 11 milliseconds
Calculated XYZ. Action took 5 milliseconds
Calculated ABC. Action took 600 milliseconds

I can extract the fields just fine using regex, and can display the entry with the max time per action using the search below:

source="*test.log*" "Calculated"
| rex field=_raw "^.*Calculated (?<ACTION>.+)"
| rex field=_raw "^.*Action took (?<DURATION>.+) milliseconds"
| stats max(DURATION) by ACTION

ACTION  DURATION
ABC     600
XYZ     122450

However, I'm lost as to how to get the top 2 durations reported like below:

ACTION  DURATION
ABC     600, 100
XYZ     122450, 67543
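One way to keep the two largest durations per action is to sort by duration and rank events within each action with streamstats. A sketch building on the rex extractions from the question (the list() output format is an assumption about the desired shape):

```
source="*test.log*" "Calculated"
| rex "Calculated (?<ACTION>\w+)\. Action took (?<DURATION>\d+) milliseconds"
| eval DURATION=tonumber(DURATION)
| sort 0 -DURATION
| streamstats count AS rank BY ACTION
| where rank<=2
| stats list(DURATION) AS DURATION BY ACTION
```

The tonumber() step matters: DURATION is extracted as a string, and a lexicographic sort would order values like 99 above 600.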
I'm trying to figure out why I get a 404 error in setup after I move a TA from my dev environment to prod. The TA works in dev. I tarball it and move it to prod. Before enabling it in prod I remove local/passwords.conf. I then start the TA. When I go to the TA's setup page I get a 404 Not Found error. I look in dev:8089 and find the information; it isn't in prod:8089.

To me it seems there should be a process that reads setup.xml at install, translates the information into REST API entries, and runs the commands to enter the information into the back end. If anything fails it should generate an error, but I couldn't find one. Is there something I'm missing? TIA
Hi. This topic has probably been done to death, but I'll ask anyway. The classic case: a user has not logged on for more than 90 days. I want to build the search with ldapsearch, enriching from AD.

There is an example search at https://docs.splunksecurityessentials.com/content-detail/old_passwords/ and I took from it only the "no login for more than 90 days" part:

| ldapsearch search="(&(objectclass=user)(!(objectClass=computer)))" attrs="sAMAccountName,pwdLastSet,lastLogonTimestamp,whenCreated,badPwdCount,logonCount"
| fields - _raw host _time
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6QZ" mktime(lastLogonTimestamp)
| convert timeformat="%Y%m%d%H%M%S.0Z"
| where lastLogonTimestamp > relative_time(now(), "-90d")
| convert ctime(lastLogonTimestamp)

The search runs and pulls the attributes from AD, but it does not show the users whose last logon (lastLogonTimestamp) is more than 90 days old. Where is the error in the search?
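If the goal is accounts that have NOT logged on in 90 days, the comparison direction looks inverted: `> relative_time(now(), "-90d")` keeps users whose last logon is newer than 90 days ago. A sketch of the inverted filter (trimmed to the relevant attributes, and assuming the mktime convert has already turned lastLogonTimestamp into epoch seconds):

```
| ldapsearch search="(&(objectclass=user)(!(objectClass=computer)))" attrs="sAMAccountName,lastLogonTimestamp"
| convert timeformat="%Y-%m-%dT%H:%M:%S.%6QZ" mktime(lastLogonTimestamp)
| where lastLogonTimestamp < relative_time(now(), "-90d")
| convert ctime(lastLogonTimestamp)
```

Accounts that have never logged on can carry an empty or zero lastLogonTimestamp, so decide deliberately whether to include them or to guard with isnotnull(lastLogonTimestamp).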
Hi all! How can I correlate events from Palo Alto VPN logs and Windows authentication per user, comparing src_ip and machine_name?

- Identify the user and the internal IP that the workstation received.
- Correlate through the internal IP which user is authenticated on the respective workstation. If they differ, trigger an alert and send an email.

Example VPN access log:

Feb 17 13:58:01 server.pa01 1,2021/02/17 13:58:00,011901013191,GLOBALPROTECT,0,2305,2021/02/17 13:58:00,vsys1,gateway-connected,connected,,IPSec,domain\user.a1,BR,NOTE01,192.168.93.210,0.0.0.0,10.10.1.10,0.0.0.0,es11-3120-f2g9-g4e7,NOTE01,5.1.5,Windows,"Microsoft Windows 10 Pro , 64-bit",1,,,"",success,,0,,0,SSLVPN,3533509,0x0

Example Windows authentication log:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{24345625-6264-3934-2E362B28D20C}'/><EventID>4624</EventID><Version>1</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2021-02-17T16:21:26.693248600Z'/><EventRecordID>1195483947</EventRecordID><Correlation/><Execution ProcessID='736' ThreadID='13684'/><Channel>Security</Channel><Computer>DC01.net</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NULL SID</Data><Data Name='SubjectUserName'>-</Data><Data Name='SubjectDomainName'>-</Data><Data Name='SubjectLogonId'>0x0</Data><Data Name='TargetUserSid'>domain\user.a1</Data><Data Name='TargetUserName'>user.a1</Data><Data Name='TargetDomainName'>domain</Data><Data Name='TargetLogonId'>0x395adc303</Data><Data Name='LogonType'>3</Data><Data Name='LogonProcessName'>NtLmSsp </Data><Data Name='AuthenticationPackageName'>NTLM</Data><Data Name='WorkstationName'>NOTE01</Data><Data Name='LogonGuid'>{00000000-0000-0000-0000-000000000000}</Data><Data Name='TransmittedServices'>-</Data><Data Name='LmPackageName'>NTLM V2</Data><Data Name='KeyLength'>128</Data><Data Name='ProcessId'>0x0</Data><Data Name='ProcessName'>-</Data><Data Name='IpAddress'>10.10.1.10</Data><Data Name='IpPort'>49191</Data><Data Name='ImpersonationLevel'>%%1833</Data></EventData></Event>

Thanks in advance!
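One stats-based approach is to normalize both sources down to (internal IP, user) pairs, then flag IPs where the VPN-assigned user and the Windows-authenticated user differ. A sketch with assumed index, sourcetype, and field names (vpn_user/vpn_ip standing in for the GlobalProtect extractions, IpAddress/TargetUserName from EventID 4624; all would need adjusting to your environment):

```
(index=pan sourcetype=pan:globalprotect) OR (index=wineventlog EventCode=4624)
| eval ip=coalesce(vpn_ip, IpAddress)
| eval user=lower(coalesce(vpn_user, TargetUserName))
| eval user=replace(user, ".*\\\\", "")
| stats values(user) AS users dc(user) AS user_count BY ip
| where user_count > 1
```

The replace() strips domain prefixes like domain\user.a1 so the two sources compare cleanly; saved as an alert with an email action, user_count > 1 fires whenever the same internal IP shows different users across the two sources.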
Hey all, I am trying to pull the username from the following events, which is everything after RIGHTNETWORKS\ in the event. To complicate things, it could be a name, a set of numbers, or a name with numbers in it. Any help is appreciated. Here are some example events:

02/17/2021 11:45:19 AM LogName=Microsoft-Windows-TerminalServices-LocalSessionManager/Operational SourceName=Microsoft-Windows-TerminalServices-LocalSessionManager EventCode=25 EventType=4 Type=Information ComputerName=BPSQCP03S11.rightnetworks.com User=NOT_TRANSLATED Sid=S-1-5-18 SidType=0 TaskCategory=None OpCode=Info RecordNumber=1079076 Keywords=None Message=Remote Desktop Services: Session reconnection succeeded: User: RIGHTNETWORKS\465714 Session ID: 350 Source Network Address: 184.97.224.236

02/17/2021 11:45:18 AM LogName=Microsoft-Windows-TerminalServices-LocalSessionManager/Operational SourceName=Microsoft-Windows-TerminalServices-LocalSessionManager EventCode=25 EventType=4 Type=Information ComputerName=RNVSASP217.rightnetworks.com User=NOT_TRANSLATED Sid=S-1-5-18 SidType=0 TaskCategory=None OpCode=Info RecordNumber=1064633 Keywords=None Message=Remote Desktop Services: Session reconnection succeeded: User: RIGHTNETWORKS\veronicagutierrez Session ID: 342 Source Network Address: 216.67.212.82
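Since \S+ matches any run of non-whitespace, a single rex handles all three username shapes (letters, digits, or a mix). A sketch (the field name `username` and the EventCode filter are assumptions to adapt):

```
index=wineventlog EventCode=25
| rex "User:\s+RIGHTNETWORKS\\\\(?<username>\S+)"
| table _time ComputerName username
```

The quadruple backslash is deliberate: SPL unescapes the quoted string once and the regex engine once more, so `\\\\` is what ends up matching the single literal `\` in RIGHTNETWORKS\465714.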
Greetings! I need help with the error below that pops up in Messages, and with how I can fix it:

Search peer indexer2 has the following message: Too many bucket replication errors to target peer=x.x.x.x:8080. Will stop streaming data from hot buckets to this target while errors persist. Check for network connectivity from the cluster peer reporting this issue to the replication port of the target peer. If this condition persists, you can temporarily put that peer in manual detention.

Search peers indexer3 and indexer1 report the same message for the same target peer.

NB: I also see "Search Factor is Not Met" and "Replication Factor is Not Met" displayed as red warnings. I have checked that the networking is working well. Help me understand what the problem is and how to fix it. Thank you in advance.
We have an issue where certain HEC feeds will burst in volume and blow out our daily ingest license. Is there an automated way to shut off the offending HEC when volume reaches a high level? 
Hello, I have an issue with the Endpoint data model while using Enterprise Security. Specifically, I am running:

|rest splunk_server=local /services/datamodel/acceleration
|fields title search

Every data model has a populated search string except Endpoint. Is there an explanation for that? Thank you in advance. Regards, Chris
Good morning. As I am new to Splunk, sometimes I need to try things that are beyond my comprehension at this time. This is one of those cases. I have the following search that lists hosts with system information:

index="index1" OR index="index2" sourcetype=WinHostMon (source=operatingsystem os="*" TotalPhysicalMemoryKB="*") OR (source=processor NumberOfProcessors="*") OR (source=disk DriveType=fixed TotalSpaceKB)
| eval RAM = round(((TotalPhysicalMemoryKB)/1000000),1)
| eval DiskSpace = round(((TotalSpaceKB)/1000000),1)
| stats values(os) as OS, values(NumberOfProcessors) as CPU, values(RAM) as "RAM (GB)", values(DiskSpace) as TotalDiskSpace by host
| eventstats sum(TotalDiskSpace) as "LogicalDiskSpace (GB)" by host
| table host, OS, CPU, "RAM (GB)", "LogicalDiskSpace (GB)"

I need to add an inputlookup command to display other fields associated with each host shown in the search above. I have set up the lookup table and the definition, and I am able to run the lookup and extract the fields I need:

| inputlookup otherinfo.csv

host   field1   field2   field3

The part I have been struggling with is adding that step into the search above. Any guidance or information to help me learn would be appreciated. Thank you
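For enriching existing results row by row, the `lookup` command (rather than `inputlookup`, which loads the whole CSV as its own result set) is the usual tool: it matches on a shared field and appends the extra columns. A sketch of the two lines to add at the end of the posted search, in place of the final table command (assuming the lookup definition is named `otherinfo` and its key column is `host`):

```
| lookup otherinfo host OUTPUT field1 field2 field3
| table host, OS, CPU, "RAM (GB)", "LogicalDiskSpace (GB)", field1, field2, field3
```

inputlookup comes into play instead when you also want rows from the CSV for hosts that produced no events at all, typically via an append/stats pattern.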
I have a entity navigation configured for a custom entity. I want to pass the name of the selected KPI as a URL parameter to the dashboard associated with the entity navigation. How can I do this as ... See more...
I have an entity navigation configured for a custom entity. I want to pass the name of the selected KPI as a URL parameter to the dashboard associated with the entity navigation. How can I do this, given that the selected KPI is not part of the entity information?
I'm using a VM. I installed the software correctly, and I had to change the default web port (8000) to port 80. I can't access the interface, and when I run a curl command the answer is this:

curl http://analytics_splunk:80
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta http-equiv="refresh" content="1;url=http://analytics_splunk/en-US/"><title>303 See Other</title></head><body><h1>See Other</h1><p>The resource has moved temporarily <a href="http://analytics_splunk/en-US/">here</a>.</p></body></html>
[root@analytics_splunk bin]#

I'd really need some help.
Hi all, just for testing purposes I changed the _internal index storage size to 50MB to see how the bucket concept works. I had the last month of data in hot buckets. Once I changed the size, some data from this month was deleted, but some very old data was not deleted. Can anyone explain why it behaves like this? And why wasn't the deleted data moved to a cold bucket?
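For context, a sketch of the indexes.conf settings that drive this behavior (the values just mirror the test scenario, they are not recommendations):

```
[_internal]
maxTotalDataSizeMB = 50
frozenTimePeriodInSecs = 2592000
```

When maxTotalDataSizeMB is exceeded, Splunk freezes the oldest buckets, and "frozen" means deleted unless coldToFrozenDir (or a coldToFrozenScript) is configured; that is why the trimmed data was not moved to cold, which is an earlier stage in the bucket lifecycle, not an archive. Deletion also operates on whole buckets, ordered by each bucket's latest event time, so which individual events disappear first depends on how the data happened to land in buckets rather than strictly on event age.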
I'm working on a system whereby Vulnerability Analysis (VA) scanners are polling various LAN segments, which results in garbage data being ingested by indexers legitimately listening on 9997 as the VA scanners do their job and try to identify known vulnerabilities.

inputs.conf caters for this and allows a list of IPs to block, followed by an accept-all. Quoting the example:

* You can also prefix an entry with '!' to cause the rule to reject the connection. The input applies rules in order, and uses the first one that matches. For example, "!10.1/16, *" allows connections from everywhere except the 10.1.*.* network.

In the instance I am working on there are ~40 individual /32 IPs before the * catch-all permit, in the format below (I have obfuscated the actual IPs with placeholders for the purpose of publishing here):

[default]
acceptFrom = "!10.1.x.x, !10.2.x.x, !10.3.x.x, *"

However, this has not worked, and all traffic is being dropped, which can be seen via a simple UI search:

index=_internal host=xxxxx log_level=WARN component=TcpInputProc

It reliably tells me data is rejected due to acceptFrom. This leads me to three possibilities:

A: My syntax is incorrect and the quotes are not required, despite being shown in the example.
B: The list of blocked IPs may be too long to be processed, but I have no way of knowing what the maximum length is.
C: A community answer...

My next step, beyond the community, will be to try to recreate this in a lab instance and experiment with altered syntax.
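For what it's worth, the quotes in the spec file example appear to delimit the value in the prose rather than being part of the setting. A sketch of the unquoted form (placeholder IPs kept, and the choice of a [splunktcp://9997] stanza over [default] is an assumption about where the restriction should apply):

```
[splunktcp://9997]
acceptFrom = !10.1.x.x, !10.2.x.x, !10.3.x.x, *
```

If the quotes are taken literally, the first rule becomes "!10.1.x.x and the last becomes *" with a stray quote; neither matches any real address, rules are first-match with a reject-by-default fallback, and so every connection is refused. That would be consistent with all traffic being dropped.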
Hello, I have a Splunk Enterprise instance on an EC2 instance (all in one box, free trial) and I am trying to get CloudTrail logs into it using the Splunk Add-on for AWS (S3 Bucket > Event Notification > SNS > SQS > EC2 instance with IAM role). In the _internal logs I see that the files are picked up from S3 (message="Wrote data to STDOUT success.", message="Sent data for indexing.", message="Delete SQS message", etc.), but then I get only these messages: message="No data input has been configured, exiting..." and message="Not data collection tasks for aws_description is discovered. Doing nothing and quitting the TA.". The CloudTrail logs do not show up in the main index or anywhere else, so everything is lost somewhere after message="Sent data for indexing.". Again, everything is in one box in EC2 (Splunk Enterprise free trial). If anyone has a solution to this, it would be greatly appreciated, thanks!
I am fairly new to Splunk and still learning. I have a Splunk event which is a mix of text and JSON. (This isn't the complete log.)

2021-02-14 00:00:03,596 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.RetrieveDataFromDQ - Total Application assets -> 1692
2021-02-14 00:00:03,596 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.CommonUtils - {"Header":{"AppId":"AD00006933","Type":"Inbound","RecId":"416c627c-41a7-428e-a871-5317c4842fe5","StartTS":"2021-02-14T05:00Z","Ver":"2.0.0"},"Application":{"APP_OS":"Linux 3.10.0-1160.11.1.el7.x86_64","APP_Runtime":"Java 1.8.0_282","APP_AppName":"DQ-bapm-Integration","APP_AppVersion":"1.0.0","Host":"zebra.cdc.growl.com/10.102.180.53","Channel":"Other"},"Service":{"Key":"DQ2bapm","URL":"https://growl-test.DQ.com/rest/2.0/assets?limit=1000&offset=1000&typeId=00000000-0000-0000-0000-000000031302&communityId=595b27d3-ff42-45e4-8dc7-0172f7d82693&domainId=2c8b39ea-0d7f-445f-acc2-a1fb3a9a12db&statusId=00000000-0000-0000-0000-000000005009","CallType":"REST","Operation":"GET"},"Results":{"Elapsed":"0","Message":"Invoking DQ REST API","TraceLevel":"DEBUG"},"Security":{"Vendor":"growl"}}
2021-02-14 00:00:03,795 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.RetrieveDataFromDQ - Total Application assets -> 1692
2021-02-14 00:00:03,795 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.RetrieveDataFromDQ - Total Application assets in appAssetList-> 1692
2021-02-14 00:00:04,499 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.ComparebapmDQRecords - List of Applications in DQ to be marked "Obsolete in bapm": [AD00007661, AD00007470, AD00007539, AD00007549, AD00007643]
2021-02-14 00:00:04,499 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.ComparebapmDQRecords - ## Total Application count from bapm ##1696
2021-02-14 00:00:04,499 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.ComparebapmDQRecords - ## Total Application Asset in DQ ##1692
2021-02-14 00:00:04,499 [[bapm2DQ].bapmprojectFlow.stage1.02] INFO com.growl.hdt.dmt.DQ.bapm.ComparebapmDQRecords - ## No of Application to Obsolete in DQ ##5

How can I extract the below?

List of Applications in DQ to be marked "Obsolete in bapm": [AD00007661, AD00007470, AD00007539, AD00007549, AD00007643]
Total Application count from bapm ##1696
Total Application Asset in DQ ##1692
No of Application to Obsolete in DQ ##5
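Each target value sits on its own log line, so a set of rex extractions with distinct capture names can pull them out. A sketch (index/sourcetype and the capture-group field names are placeholders to adapt):

```
index=your_index sourcetype=your_sourcetype
| rex "marked \"Obsolete in bapm\": \[(?<obsolete_apps>[^\]]+)\]"
| rex "Total Application count from bapm ##(?<bapm_count>\d+)"
| rex "Total Application Asset in DQ ##(?<dq_count>\d+)"
| rex "No of Application to Obsolete in DQ ##(?<obsolete_count>\d+)"
| makemv delim=", " obsolete_apps
| table obsolete_apps bapm_count dq_count obsolete_count
```

The makemv splits the bracketed list into a multivalue field; drop it if a single comma-separated string is what you want.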
Hi team, we want to know how to enable an alert notification such that, if anyone creates or deletes a field extraction or a lookup file on the search head, we are alerted with who made the change, at what time, and what action the user took. This would be very useful for the team to monitor. Could anyone kindly help with this requirement?
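One low-effort starting point is the splunkd REST access log in _internal, which records who hit the knowledge-object endpoints and when. A sketch to build an alert on (the endpoint path fragments and extracted field names are assumptions to verify against your own splunkd_access events before relying on them):

```
index=_internal sourcetype=splunkd_access ("/data/props/extractions" OR "/data/transforms/extractions" OR "/data/lookup-table-files") (method=POST OR method=DELETE)
| table _time user method status uri
```

POSTs correspond to creates/edits and DELETEs to removals; saved as a scheduled alert with an email action, this covers the who/when/what requirement, though the _audit index is the other place worth checking for the same activity.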
Hello friends, please try to assist me. My data structure is: Date, field1, field2, field3.

I need to search for events that contain a specific value in field2, and then, based on the results, display all events that share a common value of field1.

Example:

17/2 AAA BBB gfg
17/2 XXX VVV hjh
17/2 AAA MMM klk

Searching for BBB should display these lines (which have AAA in common):

17/2 AAA BBB gfg
17/2 AAA MMM klk

Help will be appreciated, thank you.
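The usual pattern for "find events matching X, then pull every event sharing a key with them" is a subsearch that returns the key values. A sketch with an assumed index name and the field names from the example:

```
index=your_index
    [ search index=your_index field2="BBB"
      | fields field1
      | dedup field1 ]
| table Date field1 field2 field3
```

The bracketed subsearch runs first and returns field1="AAA" (OR-ed together if there are several matches), which the outer search then uses as a filter. Subsearch results are capped (10,000 by default), so for very large key sets an eventstats-based approach scales better.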