All Topics


Hi, can anyone please tell me the steps to integrate Splunk with the Azure Information Protection service? I am trying to fetch the operational logs from Azure IP into Splunk. I have already tried the Log Analytics Add-on, but it does not seem to work. I have also tried the Graph API Add-on, whose Splunkbase description mentions the Azure IP service, but that is not working either.

Microsoft Log Analytics Add-on (formerly known as OMS): https://splunkbase.splunk.com/app/4127/
Microsoft Graph Security API Add-On for Splunk: https://splunkbase.splunk.com/app/4564/

Thanks and regards, Driptarup
I need to create a search to count the number of events in each geographic area of our network. Each geo area will consist of multiple subnets:

Kentucky: 10.10.10, 10.10.11, 10.10.12
Ohio: 10.10.10.20, 10.10.10.21
Indiana: 10.10.30, 10.10.31, 10.10.32, 10.10.10.33

The report should simply output a total by state:

Kentucky 112
Ohio 87
Indiana 212

All events have a full IP address, but I've already used rex to assign the first three octets to the field SUBNET. Thanks
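One possible approach (an untested sketch, not from the thread; it assumes the SUBNET field exists as described and uses the subnet values given in the question) is to map SUBNET to a state with eval case() and then count by state:

```spl
... | eval state=case(
        SUBNET IN ("10.10.10", "10.10.11", "10.10.12"), "Kentucky",
        SUBNET IN ("10.10.30", "10.10.31", "10.10.32"), "Indiana",
        true(), "Unknown")
    | stats count by state
```

For longer subnet lists, a CSV lookup mapping SUBNET to state (via the lookup command) scales better than a hard-coded case().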
As per the document below: https://docs.splunk.com/Documentation/Splunk/7.3.3/Security/ConfigureSplunkforwardingtousesignedcertificates#Configure_your_forwarders_to_use_your_certificates

We have to define sslRootCAPath under server.conf only on Linux machines and not on Windows. On Windows, where do we specify this attribute?
We are pulling in Tenable data using the TA (except for scan results, as this appears to require an API for some reason with the more recent versions). I have noticed that there is no tag information, such as tag=vulnerability or tag=report, to use with the data models built into Vulnerability Center. Any thoughts on resolving this?
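One way to get the data into the CIM data models (a sketch; the eventtype name and the search terms are assumptions, so match them to your actual Tenable sourcetypes) is to define a local eventtype and tag it:

```ini
# local eventtypes.conf
[tenable_vuln]
search = sourcetype=tenable:sc:vuln

# local tags.conf
[eventtype=tenable_vuln]
vulnerability = enabled
report = enabled
```

The data models select events by tag, so once the eventtype matches your events and the tags are applied, they should be picked up on the next acceleration run.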
Hello, our Horizontal Port Scan correlation search is triggered when the number of request destinations on the same port exceeds 10 in 15 minutes.

| tstats `security_content_summariesonly` count dc(All_Traffic.dest) as unique_dest from datamodel=Network_Traffic by All_Traffic.src, All_Traffic.dest_port
| `drop_dm_object_name("All_Traffic")`
| where unique_dest>10

The time range of the correlation search looks like this:

Earliest Time: -15m
Latest Time: now
Cron Schedule: */15 * * * *

However, it looks like we have an issue: with the current configuration, the correlation search will not detect the port scan as in the image above. In this condition, 18 destinations will be split across two cron executions. Do you know how the correlation search can be modified to detect this type of condition? Thanks.
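One common mitigation (a sketch, not from the original thread): keep the 15-minute cron but widen the lookback so consecutive runs overlap, then rely on alert throttling/notable suppression to avoid duplicate alerts for the same src/dest_port pair. For example:

```
Earliest Time: -30m@m
Latest Time: now
Cron Schedule: */15 * * * *
```

With a 30-minute window evaluated every 15 minutes, a scan whose 18 destinations straddle a 15-minute boundary is still seen in one execution; the trade-off is that each event is evaluated twice, which the throttling has to absorb.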
I have installed a forwarder on my Apache server and I see traffic (logs) moving from the web server to the indexers. When I run the command below on my search heads (plus ITSI), I get nothing.

| eventcount summarize=false index=* index=_*
| dedup index
| fields index

My inputs.conf:

[monitor:///web/JBossWeb/jws-3.0/https/logs/access.log.$(date +%Y.%m.%d)]
sourcetype = apache_access
disabled = 0
index = apache

[monitor:///web/JBossWeb/jws-3.0/https/logs/error.log.$(date +%Y.%m.%d)]
sourcetype = apache_error
disabled = 0
index = apache

Please help. Thank you.
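One likely culprit (an assumption, since the question does not confirm it): monitor stanzas do not evaluate shell expressions such as $(date +%Y.%m.%d); the path is taken literally, so no file ever matches. A wildcard usually covers dated filenames:

```ini
[monitor:///web/JBossWeb/jws-3.0/https/logs/access.log.*]
sourcetype = apache_access
disabled = 0
index = apache

[monitor:///web/JBossWeb/jws-3.0/https/logs/error.log.*]
sourcetype = apache_error
disabled = 0
index = apache
```

Also worth verifying that the apache index actually exists on the indexers; events sent to a non-existent index are dropped.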
Hello, the question is about DB Connect. I am already connected through DB Connect (version 3.1.4), but when I fetch from the SQL database using a SQL query in Splunk, the search displays only 100,000 rows (the limit for DB Connect to display results). My query should normally return more than 100,000 rows, and I need the full result set to perform a mathematical operation. How can I obtain all the rows of my DB table? The table in this DB is composed of data. Thanks in advance for your cooperation.
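In DB Connect 3.x, the dbxquery command accepts a maxrows argument that overrides the default row cap (a sketch; the connection name and query are placeholders, and the exact default/ceiling should be checked against your version's documentation):

```spl
| dbxquery connection="my_connection" maxrows=500000 query="SELECT ..."
```

For very large tables it is often better to set up a scheduled DB input and index the rows, then do the math with stats over the indexed data rather than pulling everything through one ad hoc query.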
Hello all, my goal is to stop users from sharing credentials (local authentication), which is a potentially useless endeavour, but I see two possible courses of action:

ACTIVE: set a limitation through a .conf file (authentication.conf?) that doesn't permit multiple sessions for a given local account.
PASSIVE: run a search to identify whether an account has more than one active session, with a successive witch hunt.

Are either of these possible? Thanks! Andrew
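For the PASSIVE option, a starting point (an untested sketch; the field names in _audit login events can vary by version, so verify clientip/user against your own audit data) is to look for one account logging in from several client IPs:

```spl
index=_audit action="login attempt" info=succeeded
| stats dc(clientip) as distinct_sources, values(clientip) as sources by user
| where distinct_sources > 1
```

This flags candidates rather than proving sharing (NAT, VPNs, and multiple personal devices all produce multiple source IPs for one legitimate user).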
Hi, I was trying to connect to my Oracle DB using TCPS, but I am getting the error below:

IO Error: Received fatal alert: handshake_failure, connect lapse 68 ms., Authentication lapse 0 ms.

Steps I took (with the SSL checkbox enabled in the connection):

1. Imported the corporate root certificates into the Java 1.8 cacerts store (this works for SQL Developer and for connecting from Java).
2. Since that didn't work, added the root certificates to the truststore and keystore in /etc/apps/splunk_app_db_connect/certs; this also didn't work.
3. Added ojdbc8.jar to the driver folder.

But it is still giving me the same error; other TCP connections are working fine. Kindly help.
I have just been through the partner portal registration, and for some reason my email address has automatically self-populated with my personal email address. It is now asking for a company ID etc.; I just need to be connected to my company registration so I can complete training modules. Can someone let me know how I can cancel the account registration and start from the beginning? Many thanks, Chris.
I have a list of 10 sourcetypes and a list of 14 IPs. If a particular IP stops sending data for any sourcetype in the last 6 hours, I should be alerted. How do I set this up? I tried | metadata type=sourcetypes, but that gives only missing sourcetypes. If I use | metadata type=hosts, I get only missing hosts. How do I get a combination of missing host and sourcetype?
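A common pattern (a sketch; the lookup name expected_feeds.csv and its host/sourcetype columns are assumptions, and the lookup would list all expected host/sourcetype pairs) is to compare the expected pairs against what has actually been seen recently:

```spl
| tstats latest(_time) as last_seen where index=* earliest=-6h by host, sourcetype
| inputlookup append=t expected_feeds.csv
| fillnull value=0 last_seen
| stats max(last_seen) as last_seen by host, sourcetype
| where last_seen=0
```

Pairs that reported within the last 6 hours get a non-zero last_seen and are filtered out; expected pairs with no recent data survive the where clause, which is what the alert condition fires on.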
Greetings! What is the difference between a support license and a Splunk Enterprise license, and what about the other license types? I am still confused about this; kindly help me understand how they differ. Also, is there any way to check whether any of the licenses have expired? Thank you in advance!
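For the expiry part of the question, the licenser REST endpoint can be queried from the search bar (a sketch; run it on the license master, and verify the field names against your version):

```spl
| rest /services/licenser/licenses
| eval expires=strftime(expiration_time, "%F")
| table label, type, quota, expires
```

This lists each installed license stack with its type and expiry date, which is usually enough to spot an expired or soon-to-expire license.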
Can I find out which reports are not used on my Splunk search head? I need this so that I can remove unused reports from the SH. If possible, I would also like to know when each report was last used and its number of hits. Is there a way to know this?
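One starting point (a sketch; it only covers searches recorded in _audit on this search head, within that index's retention period) is to count runs per saved search:

```spl
index=_audit action=search savedsearch_name=*
| stats count as runs, max(_time) as last_run by savedsearch_name
| eval last_run=strftime(last_run, "%F %T")
| sort runs
```

Reports defined on the SH but absent from this output over a long enough window are candidates for removal; cross-check against the full list from | rest /services/saved/searches before deleting anything.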
Related to this question: https://answers.splunk.com/answers/807988/splunk-search-show-results-from-json.html

I basically got the search working when I search for field "yyy" and its corresponding value "yy-564" in the JSON. That was solved and Splunk finds the correct event. But now my clients are complaining that when they search for all events (field="" value=""), they see duplicate events due to the mvexpand command. They think it's confusing to see several events generated from one. Are there any solutions for preventing mvexpand from showing "duplicate" events in the table? Thanks, Pete
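One workaround (a rough sketch; the field and token names here are placeholders, since the actual search is in the linked thread): keep mvexpand for the filtering step, but collapse the expanded copies back to one row per original event afterwards, e.g. by deduplicating on the raw event:

```spl
... | mvexpand pair
    | search pair="*$field$*$value$*"
    | dedup _raw
```

When no filter is supplied, every expanded copy of an event matches, and the dedup leaves exactly one row per original event; when a filter is supplied, at least one matching copy survives, so the event is still found.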
We have a use case where a user mistakenly enters the wrong username on the first attempt, enters the wrong username again on the second attempt, and on the third attempt uses the right credentials and proceeds to OTP. I am attaching a snapshot for better understanding. I want to collect the username associated with the message "Following rule 'Successful' from item 'OTP Verify' to terminalout 'Successful'".
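Without the snapshot, the exact event layout is unknown, so this is only a hypothetical sketch (the index name and the user= pattern in the rex are invented and must be adapted to the real log format):

```spl
index=otp "Following rule 'Successful' from item 'OTP Verify' to terminalout 'Successful'"
| rex "user[=:]\s*(?<username>\S+)"
| table _time, username
```

If the username only appears in an earlier event of the same login flow, a join key such as a session ID plus stats values(username) by session_id would be needed instead of a single rex.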
Hi experts, I am trying to automate db_inputs.conf file entries. For this, I copy the db_inputs.conf file from one environment to another and update two fields in each stanza (input):

connection = NewConnection
host = new:host:IP

But the new db_inputs.conf entries are not visible from the Splunk UI. Please suggest what I need to correct. Do I also need to make an entry in metadata/local.meta?
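Two things are worth checking (both are assumptions, since the thread does not confirm the root cause): hand-edited .conf files are only picked up after a restart or a configuration reload, and objects without metadata default to private/app-scoped sharing. A metadata/local.meta sketch (the stanza prefix and input name are hypothetical; match them to the conf type and your actual stanza names):

```ini
# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/metadata/local.meta
[db_inputs/my_db_input]
access = read : [ * ], write : [ admin ]
export = system
```

Also confirm that the connection named in connection = NewConnection actually exists in the target environment's db_connections.conf; DB Connect hides inputs whose connection cannot be resolved.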
I'm looking for a way to present just the live sessions for VPN connections (Juniper SSL VPN). From the actual logs I can't see anything about the "session state"; all I have are the indicators that a session was opened or closed.

Session open log:
Mar 16 19:35:51 x.x.x.x 233 <134>1 2020-03-16T19:35:51+02:00 x.x.x.x PulseSecure: - - - 2020-03-16 19:35:51 - VPN_NAME- [x.x.x.x] username(user_role)[Junos_Users_Role, RDP_Role, WEB_Provision] - Connected to computer_name port 3389

Session close log:
Mar 16 21:37:32 x.x.x.x 288 <134>1 2020-03-16T21:37:32+02:00 x.x.x.x PulseSecure: - - - 2020-03-16 21:37:32 - VPN_NAME- [127.0.0.1] username()[] - Closed connection to computer_name port 3389 after 7301 seconds, with 25908389 bytes read (in 33955 chunks) and 3445084 bytes written (in 59766 chunks)

Can anyone help me determine the active/live sessions? Thanks in advance!
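One way to derive session state from open/close pairs (a sketch; the index/sourcetype names and the rex are assumptions based on the two sample lines above): classify each event, then keep only users whose most recent event is an "open":

```spl
index=vpn ("Connected to" OR "Closed connection to")
| rex "\]\s(?<username>\S+)\("
| eval state=if(searchmatch("Closed connection to"), "closed", "open")
| stats latest(state) as state, latest(_time) as last_seen by username
| where state="open"
```

This works because the close event still carries the username (with empty roles), so latest(state) per user reflects whether the last thing that happened was a connect or a disconnect. For per-target-host granularity, add the computer name to the by clause.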
My environment is one search head -> one heavy forwarder -> three indexers in an indexer cluster. The search head's Web UI becomes slow after it cannot connect to the heavy forwarder or the indexers. I tried two scenarios:

(1) Search Head -> Heavy Forwarder -> Indexers (via SSL): when I stop the heavy forwarder for maintenance, the search head Web UI becomes very slow, even hard to operate, and TailReader-0 turns red until the heavy forwarder starts again.

(2) Search Head (directly) -> Indexers (via SSL): the same result as scenario (1).

Why does the Splunk search head become unresponsive when it cannot connect to the heavy forwarder or an indexer? When the queue is full, it should just stop ingesting data, right? What does this have to do with splunkweb?

(If possible, I would appreciate a reply in Japanese as well as English. Thank you in advance.)
Hi, I am working on a Splunk query to pull records on a daily basis depending on timing, for example 30m and 60m; for those I have confirmed the _time and relative-time conditions to pull the transactions within the timeframe. Now we have a requirement to pull the records as below: 30 mins, <60 mins on Sat & Sun>, for the rest of the day. How can I achieve this requirement in Splunk? Please suggest. Thanks, Venkatesh.
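If the intent is a 30-minute window on weekdays and a 60-minute window on Saturday and Sunday (an interpretation, since the requirement line above is ambiguous; the index name is also a placeholder), one sketch uses the weekday of the event to pick the window:

```spl
index=myindex
| eval wday=strftime(_time, "%a")
| eval window_sec=if(wday="Sat" OR wday="Sun", 3600, 1800)
| where now() - _time <= window_sec
```

Running this on a schedule with a generous earliest (e.g. -65m) lets the where clause enforce the effective window instead of the search's own time range.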
eventtype="*" "screen" OR "ui1"
| stats count AS TotalEvents by product
| appendcols [search eventtype="*" "ui2" OR "ui3" | stats count AS subsetEvents by product]
| eval percentage = 100 * subsetEvents / TotalEvents
| where percentage > 1

The performance of this query is slow. I want to calculate percentages based on subsetEvents and TotalEvents. TotalEvents is retrieved from "screen" or "ui" events; subsetEvents is retrieved from "screen1" or "ui1". Any help is highly appreciated.
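One way to speed this up (a sketch, using the terms from the query rather than the slightly different ones in the prose) is a single pass with conditional counts, which also avoids appendcols, whose column-pasting can misalign rows when the two subsearches yield different product sets:

```spl
eventtype="*" ("screen" OR "ui1" OR "ui2" OR "ui3")
| stats count(eval(searchmatch("screen") OR searchmatch("ui1"))) as TotalEvents,
        count(eval(searchmatch("ui2") OR searchmatch("ui3"))) as subsetEvents
        by product
| eval percentage = 100 * subsetEvents / TotalEvents
| where percentage > 1
```

The base search is scanned once instead of twice, and each product row carries both counts by construction, so the percentage is always computed from matching rows.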