All Topics


Will any of the knowledge objects or dashboards be affected once the add-on is applied when moving from DUO to Cisco?
I have some Netskope data. Searching it goes something like this:

index=testing sourcetype="netskope:application" dlp_rule="AB C*"
| lookup NetSkope_test.csv dlp_rule OUTPUT C_Label as "Label Name"
| eval Date=strftime(_time, "%Y-%m-%d"), Time=strftime(_time, "%H:%M:%S")
| rename user as User dstip as "Destination IP" dlp_file as File url as URL
| table Date Time User URL File "Destination IP" User "Label Name"

I am tracking social security numbers and how many times one leaves the firm. I even mapped the specific dlp_rule values found to values like C1, C2, C3, and so on. When I added this query, I had to update the other panels accordingly to track the total number of SSNs leaving the firm through various methods. On all of them, I had the above filter:

index=testing sourcetype="netskope:application" dlp_rule="AB C*"

And I am pretty sure I had results. Essentially, for the dlp_rule value, I had strings like "AB C*", and I had 5 distinct values I was mapping against.

Looking at the dataset now, a few months later, I don't see any values matching the above criteria, "AB C*". I have 4 values, and the dlp_rule with a null value appears over 38 million times. I think the null value is supposed to be the "AB C*" one. I don't have any screenshots proving this, though.

My question is, after discussing this with the client, what could have happened? When searching over all time, the above SS is what I get. If I understand how Splunk works even vaguely, I don't believe Splunk has the power to go in and edit old ingested logs, in this case to go through and remove a specific value from all old logs of a specific data source. That doesn't make any logical sense. Both the client and I remember seeing the values specified above. They are going to contact Netskope to see what happened, but as far as I know, I have not changed anything related to this data source.

Can old data change in Splunk? Can a new props.conf or transforms.conf apply to old data?

Thank you for any guidance.
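For anyone investigating a similar discrepancy, a minimal sketch (reusing the index and sourcetype from the question; the coalesce label is just to make missing values visible) that charts the distinct dlp_rule values per day over all time can show exactly when the "AB C*" values stopped appearing:

```
index=testing sourcetype="netskope:application"
| eval dlp_rule=coalesce(dlp_rule, "NULL")
| timechart span=1d count by dlp_rule limit=20
```

If the "AB C*" series drops to zero on a specific date rather than vanishing retroactively across all time, that points to a change at the source or at ingest. As a general rule, index-time settings such as props.conf and transforms.conf only apply to data as it is being ingested; they do not rewrite events already written to the index.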
I have a unique situation with my customer. I want to create a lookup table into which the customer can put the fields they want values for. The lookup has a column called fieldvalue, e.g. with CPU in the list. If the cpu field is in that table, for instance, then we have to run a calculation with the cpu field for all the events that have cpu. The fields the customer selects are numeric fields. The things I have tried are not returning the value in the cpu field. Without going into customer specifics, calculated fields won't work and the KPI approach won't work; for what they want, I need to do it this way.
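For reference, a hedged sketch of the lookup-driven pattern (the lookup name selected_fields.csv and the index your_index are placeholders, not from the post): an inputlookup subsearch decides whether the cpu calculation should run at all.

```
index=your_index
    [ | inputlookup selected_fields.csv
      | where lower(fieldvalue)="cpu"
      | eval search="cpu=*"
      | fields search ]
| stats count AS events_with_cpu avg(cpu) AS avg_cpu max(cpu) AS max_cpu
```

If fieldvalue "cpu" is present in the lookup, the subsearch expands to cpu=* and the calculation runs only over events that carry that field; if it is absent, the subsearch returns nothing, so in practice a guard for the empty case (or building the whole filter with format/mvjoin over every selected field) is still needed.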
Hello everyone, I'm trying to create an app in Splunk SOAR version 6.4.0.92 using minimal code, but I keep getting the error 'str' object has no attribute 'get' when I try to install it in the Apps section of the Splunk SOAR dashboard. Can anyone help with this, please? (Attached: error message, app.json)
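For comparison only, a minimal app.json skeleton of the general shape Splunk SOAR expects (every value below is a placeholder, not taken from the failing app). A 'str' object has no attribute 'get' during install can happen when a section the installer expects to be a JSON object, such as configuration or an entry under actions, is accidentally a plain string.

```
{
    "appid": "00000000-0000-0000-0000-000000000000",
    "name": "Minimal Example App",
    "description": "Minimal example connector",
    "publisher": "Example",
    "type": "generic",
    "main_module": "minimal_example_connector.py",
    "app_version": "1.0.0",
    "product_vendor": "Example",
    "product_name": "Example Product",
    "product_version_regex": ".*",
    "min_phantom_version": "6.4.0",
    "logo": "logo.svg",
    "license": "Copyright (c) Example",
    "configuration": {
        "base_url": {
            "description": "Base URL of the service",
            "data_type": "string",
            "required": true,
            "order": 0
        }
    },
    "actions": [
        {
            "action": "test connectivity",
            "identifier": "test_connectivity",
            "description": "Validate the asset configuration",
            "type": "test",
            "read_only": true,
            "parameters": {},
            "output": [],
            "versions": "EQ(*)"
        }
    ]
}
```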
I accessed the page below, registered with my information, and when I clicked the email button I received the error shown in the image. https://www.splunk.com/en_us/download/splunk-cloud.html

Now I can't even access the Splunk website, because this is what I see:

I'm from Brazil, if that helps in any way. So, what should I do?

UPDATE: Apparently this is a Chrome browser issue, as I was able to log in and out multiple times in Microsoft Edge without any problems! From there, I can start my free trial! So I guess the solution is to change browsers!
How can we get the health status of the HFs, UFs, and IHFs that are connected to the DS? Using REST I am able to see the health for the MC, CM, LM, DS, deployer, IDX, etc., but I am not able to get the health status (red/yellow/green) for the forwarders. The REST search I am using is:

| rest /services/server/health

On the MC I am able to see the health status of the MC, CM, LM, DS, deployer, and indexers, but not of the forwarders. However, when I run the same query on any of the HFs (opening its UI), I am able to see its health results there.
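For what it's worth, the | rest command only fans out to the search head it runs on and its configured search peers, which is why indexers show up but forwarders do not; forwarders are not search peers. A hedged workaround sketch is to query a forwarder's own management port directly (host and credentials below are placeholders):

```
# Heavy forwarders run full splunkd, so the health report endpoint is available on them.
curl -k -u admin:changeme \
  "https://my-heavy-forwarder.example.com:8089/services/server/health/splunkd?output_mode=json"
```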
Hello to the community, I am trying to query Splunk from an external SDK, for which I am asking our admins for token authentication, but I am told that Splunk does not allow SSO (which is used now) and token-based authentication to coexist. A quick query to ChatGPT suggests that this may be possible, but I'd like to have it confirmed. Could anyone who uses or administers such a deployment confirm?

B.r.

Lukas
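For reference, this is the general shape of a token-authenticated REST call an external SDK or script would make (host, token, and search are placeholders), which is a quick way for the admins to test whether a token issued under Settings > Tokens works alongside the existing SSO setup:

```
# Splunk authentication tokens are sent as a Bearer header against the management port.
curl -k -H "Authorization: Bearer <your-splunk-token>" \
  "https://splunk.example.com:8089/services/search/jobs/export?output_mode=json" \
  --data-urlencode "search=search index=_internal | head 5"
```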
Hello Experts, in Splunk ITSI we are able to see the alerts in the Alerts table, but those alerts are not being reflected on the glass tables. Has anyone experienced this issue, or can anyone suggest what might be causing it? @Ann_Treesa

Regards,
Manideep Anchoori
manideep.anchoori@erasmith.com
Hi, I have the following architecture and I am trying to set up my search head cluster. I have multiple questions.

First, if I want to have one copy of each search artifact on every SH, what should my replication factor be in this command?

splunk init shcluster-config -auth <username>:<password> -mgmt_uri <URI>:<management_port> -replication_port <replication_port> -replication_factor <n> -conf_deploy_fetch_url <URL>:<management_port> -secret <security_key> -shcluster_label <label>

Second question: how will I set up the captain in this cluster? If I run this command on SH1, will it become the captain?

/opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list "https://splunk-essh01.abc.local:8089,https://splunk-essh02.abc.local:8089,https://splunk-essh03.abc.local:8089"

Last question: if I want to connect this SHC to the indexer cluster, will this command work?

splunk edit cluster-config -mode searchhead -site site0 -manager_uri https://LB-IP-OR-DNS-HOSTNAME:8089 -replication_port 9887 -secret "<redacted>"
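For orientation, a hedged sketch of how those commands are commonly filled in for a three-member cluster like this one (hostnames reused from the question; credentials, ports, deployer URL, and secret are placeholders). With three members, a replication factor of 3 keeps a copy of every artifact on each SH, and the member on which bootstrap is run becomes the initial captain (captaincy can move later):

```
# Run on each of the three search heads, with -mgmt_uri set to that member's own URI; restart after.
/opt/splunk/bin/splunk init shcluster-config -auth admin:changeme \
  -mgmt_uri https://splunk-essh01.abc.local:8089 \
  -replication_port 9200 -replication_factor 3 \
  -conf_deploy_fetch_url https://deployer.abc.local:8089 \
  -secret <shcluster_secret> -shcluster_label es_shc

# Run on one member only, after all members are back up; it becomes the initial captain.
/opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list \
  "https://splunk-essh01.abc.local:8089,https://splunk-essh02.abc.local:8089,https://splunk-essh03.abc.local:8089" \
  -auth admin:changeme
```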
Hello All, I have a log file with the following content in JSON format. I would like to parse the timestamp, convert it to "%m-%d-%Y %H:%M:%S.%3N", and assign it back to the same timestamp field. Can someone assist me with what the props.conf and transforms.conf should be? I tried using the _json sourcetype but it produces nothing for the timestamp field. Note: I'm trying to test this locally.

```
{"level":"warn","service":"resource-sweeper","timestamp":1744302465965,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744302475969,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744302858869,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744304731808,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744304774636,"message":"1 nodes are not allocated"}
```
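A hedged starting point for the local test (the sourcetype name and the choice of indexed JSON extractions are assumptions, not from the post): the timestamp field is epoch milliseconds, so TIME_FORMAT = %s%3N is the key setting, and no transforms.conf is needed just for the timestamp.

```
# props.conf on the instance doing the parsing (here, the local test instance)
[my_json_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = json
TIME_PREFIX = "timestamp":
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 16
```

Reformatting the timestamp field itself into "%m-%d-%Y %H:%M:%S.%3N" would then be a search-time step, e.g. | eval timestamp=strftime(_time, "%m-%d-%Y %H:%M:%S.%3N").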
As the title says, I'm having trouble establishing a connection with my OpenShift namespace. Whenever I enter the details and hit Save and Test, an error pops up:

Setup Failed
An exception was thrown while dispatching the python script handler.

I've been searching the Python logs and it seems to be related to OpenSSL:

grep -B 5 -A 5 "mltk" .../var/log/splunk/python.log
-> ERROR You are linking against OpenSSL 1.0.2, which is no longer supported by the OpenSSL project. To use this version of cryptography you need to upgrade to a newer version of OpenSSL. For this version only you can also set the environment variable CRYPTOGRAPHY_ALLOW_OPENSSL_102 to allow OpenSSL 1.0.2.

As the error suggests, I tried to set the variable via the command line, as well as through /splunk/etc/splunk-launch.conf, but without success. Has anyone had this error before and knows how to solve it?
We are currently changing the sourcetype of the data we receive, as follows:

[A_syslog]
TRANSFORMS-<class_A> = <TRANSFORMS_STANZA_NAME>

[<TRANSFORMS_STANZA_NAME>]
REGEX = \w+\s+\d+\s+\d([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::B_syslog
WRITE_META = true

I want to apply a different timestamp for B_syslog here, so I am looking for that sourcetype in props.conf, but I can't see it. When the sourcetype is changed in the way shown above, can I get a different timestamp value only for that data?
Some of the security content AWS rules require you to populate the aws_service_accounts lookup for use in exceptions, but I'm having trouble working out how I can map all my AWS service accounts. For example: https://research.splunk.com/deprecated/4d46e8bd-4072-48e4-92db-0325889ef894/ (see the implementation section).
Our Checkpoint Harmony logs aren't reviewed too often. Today I went to look for something and noticed nothing is parsed. Going back in the logs, it appears that sometime in March the stream of incoming data changed drastically; there might be more data coming from the Checkpoint Harmony server compared to previously. I'm trying to create custom field extractions on this data, but it keeps crashing the field extraction wizard. Just curious if anyone has any suggestions? Thanks!
Is there a query to identify underused fields? We are optimizing the size of our large indexes. We identified duplicates and noisy logs, but next we want to find fields that aren't commonly used and get rid of them (or, if you have any additional advice on cleaning out a large index, that is welcome too). Is there a query for this?
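One hedged first pass (index name and time window are placeholders): fieldsummary reports, for the events searched, how many events carried each field, which makes sparsely populated fields easy to spot. It says nothing about whether anyone actually searches on a field, so it is worth pairing with a review of saved searches and dashboards before dropping anything.

```
index=your_large_index earliest=-24h
| fieldsummary
| table field count distinct_count
| sort + count
```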
Upon installing the Akamai SIEM add-on, I am not seeing the data input option for "Akamai Security Incident Event Manager API". Please advise. Java is installed, and we are running Splunk 9.3.3.
We are collecting various data from security equipment. The data is being stored in index=sec_A and received as sourcetype=A_syslog. In props.conf, several kinds of data are filtered as follows, and the data is stored split across different sourcetypes and indexes:

[A_syslog]
TRANSFORMS-<class_A> = a, b, c, d
TRANSFORMS-<class_B> = e, f, g

I want to add additional data to be filtered by b, but this data differs from the data currently being collected in its timestamp and REGEX, so I think I need to collect it in a different way. Is there a way to specify a different timestamp value only for the data being added, while the existing data collection continues?
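For context, a hedged sketch of one common workaround (names, port, and the TIME_FORMAT are placeholders): timestamp extraction happens before the TRANSFORMS-based sourcetype rewrite runs, so TIME_* settings on the rewritten sourcetype will not take effect. Landing the new feed on its own input, so it arrives with its own sourcetype from the start, gives it its own timestamp rules:

```
# inputs.conf - receive the new feed on a dedicated port with its own sourcetype
[udp://5515]
sourcetype = C_syslog
index = sec_A

# props.conf - timestamp rules that apply to that sourcetype at parse time
# (adjust TIME_PREFIX/TIME_FORMAT to the actual layout of the new data)
[C_syslog]
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
```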
Hi there, we're currently migrating to ES 8 and need to see Work Notes (comments) provided by analysts in some dashboards/reports. Previously, the incident_updates_lookup contained the "comment" field, which held this information, and was easy to access in a search. With ES 8, this was obviously mentioned as a limitation - "The Comments feature available in prior versions of Splunk Enterprise Security is now replaced by an enhanced capability to add notes." How can we access those notes (KV Store/Lookup/...) outside of having to click through the Mission Control/Analyst Queue manually? Where are they stored?
As we have recently enabled various audit settings on our domain, we now have 4662 events being generated on the DCs. I am now trying to reduce the number of 4662 events being forwarded to our Splunk backend on the "front end" by tuning inputs.conf on the DCs. The desired situation is that only events that contain one of the GUIDs that indicate a potential DCSync attack are forwarded to Splunk: "Replicating Directory Changes All", "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2", "1131f6ac-9c07-11d1-f79f-00c04fc2dcd2", or "9923a32a-3607-11d2-b9be-0000f87a36b2" (from https://www.praetorian.com/blog/active-directory-visualization-for-blue-teams-and-threat-hunters/). So: drop all 4662 events, except if they match any of these GUIDs.

I've been playing with the existing blacklist line for event 4662 to fulfil this purpose, but I can't seem to get it to work, not even for just one of these GUIDs, for example the one below:

blacklist1 = EventCode="4662" Message="Properties:\sControl\sAccess\s^(?!.*{1131f6ac-9c07-11d1-f79f-00c04fc2dcd2})"

Obviously I've restarted the Splunk forwarder after every tweak. Can anybody help with compiling a proper blacklist entry?
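Not a verified answer, but a sketch of the pattern usually needed here (GUIDs copied from the question): the blacklist Message regex is matched against the whole multi-line message, so to express "drop 4662 unless one of these GUIDs appears anywhere" the negative lookahead has to be anchored at the start of the message and allowed to cross newlines, e.g. with [\s\S] instead of .:

```
# inputs.conf on the DCs, inside the [WinEventLog://Security] stanza - test on a single DC first
blacklist1 = EventCode="4662" Message="^(?![\s\S]*(?:Replicating\sDirectory\sChanges|1131f6ad-9c07-11d1-f79f-00c04fc2dcd2|1131f6ac-9c07-11d1-f79f-00c04fc2dcd2|9923a32a-3607-11d2-b9be-0000f87a36b2))"
```

With this, a 4662 event is blacklisted only when none of the GUIDs (or the "Replicating Directory Changes" text) occurs in the message, so the DCSync-relevant events keep flowing while the rest are dropped.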