All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Dear Splunkers, I am trying to implement end-to-end monitoring where searches depend on multiple lookups, and those lookups are derived from different searches running internally. The idea is to diagnose the situation thoroughly and as early as possible without running all the searches/alerts all the time, because there are at least 60 to 70 odd searches in production. Please look at the example below for more clarity.

Search 1

|inputlookup Lookup1
|rename sys_id AS app_sys_id
|lookup Lookup2 parent AS app_sys_id OUTPUTNEW child AS server_sys_id
|mvexpand server_sys_id
|join server_sys_id [inputlookup Lookup3 | rename sys_id AS server_sys_id]
|fields host, server_fqdn, server_status, server_support_group, server_type, server_sox, server_sas, server_admin, server_location, server_environment
|outputlookup Lookup1

As we can see here, Lookup1 depends on two lookups internally (Lookup2, Lookup3).

Search 2

sourcetype=someother_source2 | outputlookup Lookup2

This sourcetype will eventually create Lookup2.

Search 3

sourcetype=someother_source3 | outputlookup Lookup3

This sourcetype will eventually create Lookup3.

This is just an example; in production I have more than 30 searches chained in this manner, and I do not want to create 30 alerts all running unnecessarily at 30 odd timings. I was thinking I could create an alert which checks whether the result count of Search 1 is zero; if it is zero, I should be alerted, and only then should it internally run a report (Search 2) to check whether data is missing there. If not, move on to Search 3 to check further; if something is missing, we need to be alerted that data is missing from Search 2 or Search 3. This would help me diagnose the situation early, and unnecessary alerts would not be running on my search head all the time. Any ideas would be appreciated; I am okay using alerts, reports, dashboards, scripts, etc. Again, thanks in advance.
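A minimal sketch of the cascading check described above, collapsed into a single scheduled health-check search. The lookup names come from the example; the zero-row alert condition and the case ordering are assumptions about what "missing data" means here:

```spl
| inputlookup Lookup1 | stats count AS lookup1_rows
| append [| inputlookup Lookup2 | stats count AS lookup2_rows]
| append [| inputlookup Lookup3 | stats count AS lookup3_rows]
| stats values(*) AS *
| eval problem=case(
    lookup1_rows=0 AND lookup2_rows=0, "Lookup2 feed (Search 2) is missing data",
    lookup1_rows=0 AND lookup3_rows=0, "Lookup3 feed (Search 3) is missing data",
    lookup1_rows=0, "Search 1 produced no results",
    1=1, "OK")
| where problem!="OK"
```

Scheduled as one alert that triggers when any results are returned, this checks the downstream lookup first and only reports the upstream feeds when they too are empty, so a single alert covers the whole chain.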
Hello, I have cloned the CSV sourcetype in Splunk, created a new CSV sourcetype called Alpha_csv, and configured it to monitor the CSV files. I have created a deployment app and configured inputs.conf in the deployment app. My CSV files are being monitored, but the fields are not extracted as per the header values (no CSV headers are getting listed). The csv sourcetype is set to auto, comma-separated CSV. Can anyone help on this? Thanks
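A minimal props.conf sketch for a cloned CSV sourcetype with header-based field extraction. The stanza name Alpha_csv is from the post; note that INDEXED_EXTRACTIONS must be present on the forwarder that actually reads the file, which is an assumption about this deployment:

```
[Alpha_csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
```

If these settings only live in the deployment app pushed to the deployment clients' search heads or indexers but not to the universal forwarder monitoring the files, the header line will be indexed as an ordinary event and no header fields will appear.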
I have two fields called field1 and field2. Both have the same value, "xyz", but when I try to compare them, Splunk says the values are different. The output of the query below is "No", but I am expecting "Yes" since both are equal.

eval results1=if(field1=field2,"Yes","No")

Could a difference in field format be causing this?
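A common cause of this is invisible leading/trailing whitespace or a case difference between the two values. A hedged sketch that normalizes both fields before comparing (the trim/lower normalization is an assumption about what actually differs):

```spl
| eval results1=if(lower(trim(field1))=lower(trim(field2)), "Yes", "No")
```

If this returns "Yes" while the plain comparison returns "No", inspecting len(field1) and len(field2) will usually reveal the stray characters.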
Hi, I am a novice to Splunk and trying to learn and explore things in it. Currently I am stuck with one problem while extracting values from XML. Here is the sample XML format from a Splunk message:

2020:04:15:02:03:54.643 INFO xyz NOFMT 123456778 tomcat-http--105 10123456778 logged message: <test> <dept id="0" value="Facilities"/> <dept id="1" value="Support"/> <dept id="2" value="IT" group="test"/></test>

I need to extract the XML message and compare the values based on the dept id, e.g. where id=0 AND LIKE(value,"%facilities%") OR id=1 AND LIKE(value,"%IT%"), and filter the messages based on that condition. While I am trying to extract the ids and values, the results are getting cluttered and not showing the desired results. Please help me in this regard.
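A hedged sketch of one way to pull each dept id/value pair out of the XML and filter on them. The field names dept_id and dept_value are made up for illustration; mvzip/mvexpand keep each id paired with its own value instead of cluttering the event with two unrelated multivalue fields:

```spl
| rex max_match=0 "<dept id=\"(?<dept_id>\d+)\" value=\"(?<dept_value>[^\"]+)\""
| eval pair=mvzip(dept_id, dept_value, "=")
| mvexpand pair
| eval dept_id=mvindex(split(pair,"="),0), dept_value=mvindex(split(pair,"="),1)
| where (dept_id=0 AND like(dept_value,"%Facilities%"))
     OR (dept_id=1 AND like(dept_value,"%IT%"))
```

Note that like() is case-sensitive; wrapping dept_value in lower() and lowercasing the patterns would make the match case-insensitive.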
The search is like this:

host=linux01 sourcetype="linux:audit" key="linux01_change" NOT comm IN (vi)

How can I create a timechart to show the number of total events (host=linux01 sourcetype="linux:audit") and the number of filtered events (host=linux01 sourcetype="linux:audit" key="linux01_change" NOT comm IN (vi))? Thanks for any help!
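A common pattern for charting a total alongside a subset is to search the broader event set and count the filtered subset with count(eval(...)). A sketch using the fields from the post; the isnull(comm) clause is an assumption so that events with no comm field are still counted, matching the behavior of NOT comm IN (vi):

```spl
host=linux01 sourcetype="linux:audit"
| timechart count AS total_events,
            count(eval(key="linux01_change" AND (isnull(comm) OR comm!="vi"))) AS filtered_events
```

Both series share one time axis, so they render as two lines on the same timechart.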
I would like to extract the UNC path from a log; below is a portion of the file:

;<soa:FileSystem identifier="8ec65285-11ac-45a5-9652-425b7494b0df" name="Windows" description="Windows File System" leftaligncheckboxes="false" instance="102711ce-e483-46bc-bf6c-f42ae6faf234" signature="00000000-0000-0000-0000-000000000000" scheme="file" opencapable="true" consumeopen="true" emitopen="true"><soa:Location>file://fileserver/folder/folder1/folder2/</soa:Location

I would like to have this part:

//fileserver/folder/folder1/folder2

I have tried different rex combinations without luck. Any suggestions? Thank you
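A hedged rex sketch that captures everything after file: inside the <soa:Location> element and then strips the trailing slash (the field name unc_path is made up for illustration):

```spl
| rex "<soa:Location>file:(?<unc_path>//[^<]+)"
| eval unc_path=rtrim(unc_path, "/")
```

The [^<]+ character class stops the capture at the closing tag, and rtrim removes the trailing slash so the result is //fileserver/folder/folder1/folder2.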
Hello, I am going to upgrade from Splunk Enterprise version 7.0.3 to version 7.3 (standalone). I am asking before I impact my customer's production environment. After the upgrade is complete, will my license remain on the server, or will it be necessary to reinstall it? If it is necessary, how can I retrieve my license now so that I can upload it again when the upgrade is done? Thanks in advance.
22:13:06.901Z INFO my-portal: blah : blah - success tracker: { "trackId": "foo", "hashedAccountId": "bar", "ip": "127.0.0.1", "queryUrl": "http://my.domain.com/aluminum/batPreferences/txm", "queryMethod": "GET", "elapsed": 91.561 }

The nodejs output looks kinda like what's shown above. Any suggestions for parsing this so that I can view the syntax-highlighted JSON would be appreciated. I've tried a transform to reassign the INFO line to a separate sourcetype, but that doesn't change the fact that I only see the raw text in my search.

props.conf

[nodejs:all]
KV_MODE = json
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TRUNCATE = 0
LINE_BREAKER = ([\r\n]+)(\d{2}:\d{2}:\d{2})
TRANSFORMS-strip_INFO = strip_INFO

transforms.conf

[strip_INFO]
REGEX = ^\d{2}:\d{2}:\d{2}.\d{3}Z
FORMAT = sourcetype::nodejs_out
DEST_KEY = MetaData:Sourcetype

Clearly this doesn't work, but I'm a bit stumped. Thank you.
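KV_MODE=json only applies when the entire event is JSON, and here the JSON is preceded by a timestamp/label prefix. One option, sketched below, is a search-time extraction of just the JSON payload followed by spath (the field name json_payload is made up for illustration):

```spl
... | rex "tracker: (?<json_payload>\{.*\})"
    | spath input=json_payload
```

This leaves _raw untouched (so the event still renders as plain text) but extracts trackId, queryUrl, elapsed, etc. as searchable fields; for syntax-highlighted rendering the JSON itself would need to be the whole event, e.g. by stripping the prefix at index time with a SEDCMD.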
Hi, How can I pull a list or report of alerts which is having any of these specific words? "purchase" OR "search" OR "booking"
my search
| eval _time=strftime(_time, "%Y-%m-%d")
| stats latest(AssetRiskScore) AS score BY _time AssetNames
| sort 0 - _time
| dedup 2 AssetNames
| reverse
| streamstats current=f last(score) AS prev_score BY AssetNames
| eval change = score - prev_score
I have a new field called new_time, and I need to add a time picker on the dashboard but have it use the new_time values. Can someone please suggest a solution?

| eval new_time = strptime(old_sec,"%m/%d/%Y %H:%M")
| fieldformat new_time=strftime(new_time,"%m/%d %H:%M")
| eval _time=new_time
| xyseries _time, Name, Values
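One common approach is to leave the panel's own time range wide open and filter on new_time yourself using the time input's tokens. A hedged sketch assuming a time input with token ID mytime, and assuming the picker returns relative time modifiers such as -24h@h (absolute date ranges return epoch values and would need an extra conversion step):

```spl
| eval new_time = strptime(old_sec,"%m/%d/%Y %H:%M")
| where new_time >= relative_time(now(), "$mytime.earliest$")
    AND new_time <= relative_time(now(), "$mytime.latest$")
```

The key design point is that the dashboard's time picker normally constrains _time only, so filtering on a derived time field has to be done explicitly in the search.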
Hello All, I've noticed what I believe to be a bug in splunk_ta_o365. I wanted to point it out, or see if I am missing something here. I have many events in Office 365 which look similar to this (it is confusing, but this user did NOT log in):

Raw logs:

Workload=AzureActiveDirectory Operation=UserLoggedIn ResultStatus=Succeeded LogonError=UserAccountNotFound

The "Splunk Add-on for Microsoft Office 365" ships the lookup "splunk_ta_o365_cim_authentication.csv", which compares the Workload, Operation, and ResultStatus fields; it does not consider LogonError. It OUTPUTs the following new fields:

dataset_name=authentication
action=success

The splunk_ta_o365 includes tag=authentication for dataset_name=authentication. So the Authentication.Successful_Authentication data model (which looks for tag=authentication action="success") incorrectly tells you a user has logged in when, in fact, the login failed because of "UserAccountNotFound".

Using Splunk 8+, Splunk_SA_CIM 4.15.0, splunk_ta_o365 2.0.1
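Until the add-on's lookup considers LogonError, a hedged workaround sketch is to re-derive action at search time on top of the add-on's output (classifying these events as "failure" is an assumption about the preferred CIM mapping):

```spl
| eval action=if(action="success" AND LogonError="UserAccountNotFound", "failure", action)
```

Placed in a local calculated field or an eventtype-scoped search, this keeps the Successful_Authentication dataset from counting UserAccountNotFound events as logins without modifying the add-on itself.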
The Splunkbase page for the RWI app states: "There are no major considerations for installing Remote Work Insights - simply follow your usual Splunk App Deployment procedure." I find this to be very misleading because if you do not have fully populated data models this app will do literally nothing, even if you point the macros to the correct indexes. Please be transparent if you want people to install & use your app. Advice is welcomed, perhaps I'm missing a relevant workaround. https://splunkbase.splunk.com/app/4952/#/details
Do you need to install the add-on on the indexers? If the Heavy Forwarder has a tcp listener port for Palo logs then do you also need to install the TA add-on on the indexers? We were planning on installing the app on the Search Head and add-on on HF only. Will that work? What are the specific advantages installing the TA add-on and APP on the Search Head? Is that setup only for certain Palo products and services? Thank you!
Hi All, I am a little bit of a novice with Splunk, but I am curious to find the distinct number of customers between the 95th and 99th percentiles of API response times. For a little more clarity: say there are 10,000 customers who fall under the 95th percentile and 15,000 under the 99th percentile; the result would be 5,000. The API response times and customer IDs are recorded in a server log as result and ID respectively.
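A hedged sketch of one way to do this: compute the two percentile cut-offs over all events with eventstats, count the distinct IDs in each band, and subtract. The field names result and ID are taken from the post:

```spl
| eventstats perc95(result) AS p95, perc99(result) AS p99
| stats dc(eval(if(result<=p95, ID, null()))) AS under_p95,
        dc(eval(if(result<=p99, ID, null()))) AS under_p99
| eval between_p95_p99 = under_p99 - under_p95
```

Note this counts customers per event; if a customer has responses in both bands they are counted in both dc() terms, which matches the "10,000 under p95, 15,000 under p99, answer 5,000" framing in the post.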
I'm seeing this error message stream when the TA-ObserveIT tries to get what I think is the configuration information.

2020-04-16 16:29:33,418 ERROR pid=28982 tid=MainThread file=base_modinput.py:log_error:307 | Traceback (most recent call last):
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/modinput_wrapper/base_modinput.py", line 113, in stream_events
    self.parse_input_args(input_definition)
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/modinput_wrapper/base_modinput.py", line 152, in parse_input_args
    self._parse_input_args_from_global_config(inputs)
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/modinput_wrapper/base_modinput.py", line 171, in _parse_input_args_from_global_config
    ucc_inputs = global_config.inputs.load(input_type=self.input_type)
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/splunktaucclib/global_config/configuration.py", line 270, in load
    input_item['entity']
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/splunktaucclib/global_config/configuration.py", line 175, in _load_endpoint
    **query
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/solnlib/packages/splunklib/binding.py", line 287, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/solnlib/packages/splunklib/binding.py", line 69, in new_f
    val = f(*args, **kwargs)
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/solnlib/packages/splunklib/binding.py", line 665, in get
    response = self.http.get(path, self._auth_headers, **query)
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/solnlib/packages/splunklib/binding.py", line 1160, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/splunk/hot/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/solnlib/packages/splunklib/binding.py", line 1221, in request
    raise HTTPError(response)
HTTPError: HTTP 500 Internal Server Error -- {"messages":[{"type":"ERROR","text":"Unexpected error \"<class 'splunktaucclib.rest_handler.error.RestError'>\" from python handler: \"REST Error [400]: Bad Request -- HTTP 400 Bad Request -- Invalid Events Pagination Value: TypeError(\"int() argument must be a string or a number, not 'NoneType'\",)\". See splunkd.log for more details."}]}

I've been trying to figure out a way of printing the request to the app or splunkd log files so I can see if it makes sense. The "Invalid Events Pagination Value: TypeError" doesn't make sense to me, since inputs.conf has this in it:

events_pagination = 10000

I'm out of ideas. Any thoughts on how to resolve this are appreciated. TIA, Joe
Hi, So I have a page with more than a few URLs that all represent the same page. However, one of these URLs has a value indicating it came from a specific location, i.e. a promotion. I want to chart on a line graph both the promoted URL and all URLs that are not the promoted URL. I figure I have to use an eval if statement, but what I've done thus far is not working.

page=/store/product_abcd1234* -> note this returns many variants of the same page
|eval promostatus=if(page="/store/product_abcd1234?source=promoted, promo, nopromo)
|chart distinctcount(user) by promostatus

I want to count uniquely the number of users who had a referral from the promotion and the number of users who loaded the entire group of page values (except the promo). I'd like to chart this over time with both lines on the same time graph.
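The eval in the post is missing the closing quotes around its string literals, and chart has no time axis, so two lines over time would need timechart. A hedged sketch with the quoting fixed, using the field names from the post:

```spl
page=/store/product_abcd1234*
| eval promostatus=if(page="/store/product_abcd1234?source=promoted", "promo", "nopromo")
| timechart dc(user) BY promostatus
```

This yields one "promo" series and one "nopromo" series of distinct-user counts per time bucket on the same graph.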
I'm using 4.5.5. I created an exit point to track calls to java.sql.Connection.isValid() but it doesn't work.  Since then I've been seeing this error in the agent logs: [Log files redacted] Any suggestions? thanks ^ Post edited by @Ryan.Paredez to remove log files. Please do not share or attach log files to community posts for security and privacy reasons.
I'm the ePO administrator at my company and we are trying to integrate McAfee with Splunk. McAfee told me that on the ePO side I just need to set the Splunk server as a Registered Server (as I've set on this image below), but I'm not able to establish a connection with Splunk and forward Syslogs. What can I do to establish a connection with Splunk and send syslogs? Do we need to install the McAfee add-on and/or install DB Connect?
I am trying to add Threat Intelligence to my Splunk ES via the HISAC taxii discovery service. I have set up the Intelligence Download with these configs:

TYPE: taxii
URL: https://members.nhisac.org/taxii-discovery-service
POST ARGUMENT: collection="" earliest="-90d" taxii_username="" taxii_password="PASSWORD"

In the Threat Intel Audit tab, the status is "TAXII feed polling starting" and has not changed. Does anyone know if this is the correct way to do this?