All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I'm having some issues sending specific events to nullQueue. I want all events from a specific source with event_type=SETXATTR sent to nullQueue. I have this in my props and transforms files, and it is currently not working:

props.conf:
[source::/syslog-ng/nasuni/*/*.log]
TRANSFORMS-null = setnull

transforms.conf:
[setnull]
REGEX = (?<event_type>SETXATTR)
DEST_KEY = queue
FORMAT = nullQueue

Also, where exactly on the indexers should these be? I've read some say to put them in the $SPLUNK_HOME/etc/system/local folder, and others say the $SPLUNK_HOME/etc/apps/myapp/local folder. Thanks!
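For reference, a common pitfall with this setup: index-time TRANSFORMS run before any search-time field extraction, so the REGEX is applied to the raw event text (_raw), not to an extracted event_type field. A minimal sketch, assuming the literal string "event_type=SETXATTR" appears somewhere in the raw events (adjust the pattern to whatever the raw text actually contains):

```ini
# props.conf
[source::/syslog-ng/nasuni/*/*.log]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
# Matches against _raw at parse time; search-time fields do not exist yet
REGEX = event_type=SETXATTR
DEST_KEY = queue
FORMAT = nullQueue
```

On placement: these files belong on the first tier that parses the data (the indexers, or heavy forwarders if you have them). Both $SPLUNK_HOME/etc/system/local and an app's local directory work; an app directory is generally easier to manage and deploy.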
Can I install Linux indexers in an environment that is all Windows? The search heads are clustered, but the indexers are not.
Hi, I must be missing something. I have a simple search using a time modifier: index=MyIndex earliest=-30m My expectation is when I search, the results will be searched and returned only over the last 30 minutes of events. When I use the time picker and set to ALL TIME, it still only returns the last 30 minutes but searches over ALL EVENTS? Is this the correct behavior? I looked at the job log and it did have the earliest event as the first event in the index (ALL TIME) with the time modifier. Read this a few times, https://docs.splunk.com/Documentation/Splunk/8.0.3/Search/Specifytimemodifiersinyoursearch and didn't see any verbiage to this behavior. Thank you! Chris
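For what it's worth, the documented precedence is that time modifiers in the search string override the time range picker, so with All Time selected this search should still be evaluated over only the last 30 minutes. A sketch that makes the intent unambiguous by stating both bounds inline:

```spl
index=MyIndex earliest=-30m latest=now
```

The job details may still reflect the picker's All Time window as the outer scope, but the inline modifiers are what bound the events actually returned.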
Hi all, I am facing an issue with indexing. I have a .csv file from an external source, and the file size is 11,236 KB. I have also configured an access-log data input, and I want to generate a report of AD group details. The .csv file and the access log share one common field (user_id), but when I try to generate the report, the .csv file takes a long time and I get the error "failed to reopen lookup (.csv) file". Can you please help me with this?
Version 7.3.1 - Splunk Enterprise - Licensed. I receive a "500 Internal Server Error" when I try to navigate to Settings > Data Inputs. I've searched the posted Answers for version 6.3: the "splunk_httpinput" app is already enabled, and the other suggestion was to update to version 6.4.3. Neither provided a resolution to the problem. I also haven't had any luck finding alternate methods to modify data inputs. Thanks for the help.
Trying to extract the actual query:

sourcetype=extendedevent EventClass=QUERY_END | rex "TextData=(?P<Query>.*);NTCanonicalUserName" | rex field=Query "FROM [(?\w+\W?\w+)]" | bin _time span=1d | eval mytime=strftime(_time,"%m/%d/%Y") | eval DatabaseName = DatabaseName+":"+CubeName | stats dc(NTUserName) by mytime, DatabaseName

The data looks like this:

[2020-05-28 16:01:47.868 +00:00] CurrentTime=5/28/2020 4:01:47 PM +00:00;StartTime=5/28/2020 4:01:47 PM +00:00;EndTime=5/28/2020 4:01:47 PM +00:00;EventClass=QUERY_END;EventSubclass=1;Severity=0;Success=1;Error=0;ConnectionID=2804894;ClientProcessID=4364;SPID=12255472;ErrorType=0;Duration=78;CPUTime=78;IntegerData=5;TextData=select [LAST_SCHEMA_UPDATE],[LAST_DATA_UPDATE] from $system.mdschema_cubes where ([CATALOG_NAME]=@p1);NTCanonicalUserName=xxxx\xxx;SessionID=F1E0DF9C-E2B2-48BD-BFF4-FB57D3868BC6;NTUserName=xxxxx;NTDomainName=xxxxx;DatabaseName=xxxxx;ApplicationName=xxxxx05/28/2020 00:31:26;ServerName=xxxxx;RequestID=c65c0c7e-97d8-4259-a0aa-eab745e72b44;RequestID=xxxxx-a430-418f-898a-37282d0ee2df[0];RequestID=xxxxx-d7ed-4401-9856-c974c21017c2[24];

I checked the regex at https://regex101.com/r/ObGKC9/3 and it shows 917 steps. I need help making it take fewer.
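Two things may help with the step count, sketched below and untested against the full data set. First, an unescaped `[` after FROM starts a character class rather than matching a literal bracket; second, a lazy `.*?` usually backtracks less than a greedy `.*` here. The capture-group name CubeName is an assumption based on the later eval; adjust to your real field:

```spl
sourcetype=extendedevent EventClass=QUERY_END
| rex "TextData=(?P<Query>.*?);NTCanonicalUserName"
| rex field=Query "FROM \[(?P<CubeName>\w+\W?\w+)\]"
| bin _time span=1d
| eval mytime=strftime(_time,"%m/%d/%Y")
| eval DatabaseName = DatabaseName+":"+CubeName
| stats dc(NTUserName) by mytime, DatabaseName
```

If the query text can never contain a semicolon, replacing `.*?` with the negated class `[^;]*` is typically cheaper still, since it never backtracks.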
Hi all, I'm quite new so pardon my bad exposition, I'll try my best to explain what i'm trying to achieve. Can two fields, one from the event itself (ilo.uri), the other from an automatic lookup (il_host) be compared in a "like" fashion (ilo.uri = *il_host*) inside of a tstats search WHERE clause? Here's my current search: | tstats values(ilo.level) AS "level" FROM datamodel="b2b.ilo" WHERE ilOutbound.level IN ("error", "warn") ### AND ilo.uri = *il_host* ### BY _time, ilo.level | timechart span=1h count(level) AS "level" BY ilo.level
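As far as I know, not directly inside the tstats WHERE clause: the automatic lookup has not been applied at that stage, so il_host does not exist yet. A commonly suggested workaround, sketched here with placeholder names (my_hosts_lookup and lookup_key stand in for your actual lookup definition and its input field), is to split the uri out with BY, apply the lookup explicitly, then filter with like():

```spl
| tstats values(ilo.level) AS level FROM datamodel="b2b.ilo"
    WHERE ilo.level IN ("error", "warn")
    BY _time, ilo.uri, ilo.level
| lookup my_hosts_lookup lookup_key OUTPUT il_host
| where like('ilo.uri', "%" . il_host . "%")
| timechart span=1h count BY ilo.level
```

Note that ilo.uri must appear in the BY clause so the field is available for the later where; the single quotes around 'ilo.uri' are needed because the field name contains a dot.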
Hi, I have a column _time displayed in the results due to the timechart used in the query. It is currently displayed in the form 03-2020, but I want to show it as March or Mar. Is there a way to do that?
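One approach, assuming the timechart buckets are monthly: keep _time numeric so sorting still works, and change only its displayed form with fieldformat (%b gives "Mar", %B gives "March"):

```spl
... | timechart span=1mon count
| fieldformat _time = strftime(_time, "%b")
```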
How to customize tooltip in Custom Tooltip information on mouseover? Will I be able to get raw data on mouseover and use it for secondary query to load information for tool tip?
Hi, I am new to Splunk, so I am having a little trouble understanding the timestamp concept. In the data I am working with, the _time column has taken the date and time from the event data, but I want the _time column to show the date and time when I uploaded the data to Splunk for indexing. So if this is the format:

(Time) Events
(11/13/2019 2:30:20) 11-13-2019 14:30:20

the time is being taken from the event, but I want it to change to the date on which I uploaded the data, 24 April 2020 (04/24/2020). How can I do this? Please help me.
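One option, sketched here with a placeholder index name and a field name (indexed_at) of my own choosing: at search time the moment of indexing is available in the internal _indextime field, so you can surface it without re-ingesting anything:

```spl
index=your_index
| eval indexed_at = strftime(_indextime, "%m/%d/%Y %H:%M:%S")
| table _time indexed_at
```

If you want _time itself to equal the upload time for future ingests, DATETIME_CONFIG = CURRENT in props.conf tells Splunk to ignore timestamps in the data, but that only affects data indexed after the change, not events already on disk.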
Hello, We've followed the instructions for version 2.0.2 of splunk_ta_o365, running on a Splunk 8.0.2 HF. But in the logs we're getting: 2020-05-28 07:45:25,328 level=ERROR pid=2827626 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:67 | datainput=b'O365_Management_Activity' start_time=1590677124 | message="Data input was interrupted by an unhandled exception." Traceback (most recent call last): File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 65, in wrapper return func(*args, **kwargs) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 102, in run executor.run(adapter) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/batch.py", line 47, in run for jobs in delegate.discover(): File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 127, in discover subscription.start(session) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 160, in start response = self.perform(session, 'POST', '/subscriptions/start', params) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 169, in _perform return self._request(session, method, url, kwargs) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 181, in _request raise O365PortalError(response) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 26, in __init_ message = str(response.status_code) + ':' + payload TypeError: can only concatenate str (not "bytes") to str We've checked and rechecked our Azure permissions, which look like: Hoping someone has a lead on what's going wrong. At least one poster with a similar error said that "Grant Permissions" was missed on the Azure side, but as far as we can tell it looks OK there. Thanks
Hello, I have an issue with this type of log:

[5/22/20 14:46:23:381 GMT] 0000009c ThreadMonitor 3 UsageInfo[ThreadPool:hung/active/size/max]={server.startup:0/0/1/3,ProcessDiscovery:0/0/1/2,TCPChannel.DCS:0/2/4/20,HAManager.thread.pool:0/0/2/2,Default:0/2/6/20}

I created a regex that works:

rex field=_raw "\[(?[^\[]*)\]\s(?[^\s]*)\s(?[^\s]*)\s(?[^\s]*)\s(?.{11})(?\[\w.*\])(?[\=])\{((?\w.*?):(?\d+)\/(?\d+)\/(?\d+)\/(?\d+))+" | table timestamp threadname hung max

But threadname is always the first match, in my case server.startup. Is it possible to add a where clause to extract the desired threadname, for example HAManager? Also, I can't modify props.conf because I don't have admin rights. Thanks for your help. David
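A where clause can't reach inside the regex, but anchoring the pattern on the pool name itself captures only that entry. A sketch for HAManager, based on the sample event, with capture-group names of my own choosing; since rex runs at search time, no props.conf (or admin rights) is needed:

```spl
| rex field=_raw "^\[(?<timestamp>[^\]]+)\]"
| rex field=_raw "(?<threadname>HAManager\.thread\.pool):(?<hung>\d+)/(?<active>\d+)/(?<size>\d+)/(?<max>\d+)"
| table timestamp threadname hung max
```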
Hi, I have heard a lot about integrating Splunk with Jira, but what are the key advantages? What is the major thing we can achieve by doing so? Splunk indexing is expensive, so I need some solid reasons to convince management. If someone can shed some light on this, that would be very helpful.
Hey guys, My Splunk client gave a couple of license usage warnings, and I think I fixed what was causing it. (It was indexing way too much by wrong search string for a week). Is there any way to calculate how much all of my searches are using now? The 'License Usage Reporting' tab only shows how much I've used in the past, but it would be nice to know if I've solved it now.
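Current consumption is logged continuously in license_usage.log, so a search over today (a sketch based on that log's standard Usage fields; run it on the license master, or wherever its _internal data is forwarded) shows what is being counted against the license right now, per index:

```spl
index=_internal source=*license_usage.log type=Usage earliest=@d
| stats sum(b) AS bytes BY idx
| eval GB = round(bytes/1024/1024/1024, 3)
| sort - GB
```

If the problem index has dropped back to its normal daily volume here, the fix took.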
I would like to create a report for VMware hosts with storage stats like storageCapacity, Storage_free, and Storage_used. Can someone help me with the correct source and sourcetype details here?
Hi all, my production License Master link opens directly without asking for credentials, and it does not show the Account Settings option. How do I get to the account settings, and how do I change my password? Regards, Vijay K.
Please let us know whether Splunk supports ingesting DNS trap logging.
Events are not getting logged into Splunk. I checked the splunkd.log file and found the below warning: WARN ExecProcessor - Streaming XML data: Expected tag "event", instead received "bound". Is anyone familiar with this issue? Any help would be appreciated. Thanks in advance!
Hi experts, Search 1: base search from JSON... | eval col1=strptime(taken_date,"%b %d %Y %H:%M:%S") | stats latest(col1) as max_col The above search returns a single value. Based on the single value max_col , i would like to run the below search which displays the values of col1 only when col1 > (max_col-2629743) . Search 2: base search from JSON... | eval col1=strptime(taken_date,"%b %d %Y %H:%M:%S") | eval max_col1_30= max_col-2629743 | where col1 > max_col1_30 | table col1 max_col1_30 Could you please help by joining both the searches? Sub search is not working for me. Thank you.
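One way to avoid the join/subsearch entirely: eventstats attaches the maximum to every row in a single pass. A sketch combining the two searches, assuming the greatest col1 value is what latest() was meant to find:

```spl
base search from JSON...
| eval col1 = strptime(taken_date, "%b %d %Y %H:%M:%S")
| eventstats max(col1) AS max_col
| eval max_col1_30 = max_col - 2629743
| where col1 > max_col1_30
| table col1 max_col1_30
```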
Event status history: False positive (25 May), False positive (24 May), Investigating (23 May), Investigating (22 May), Service degradation (21 May).

Here, when the status changed from Service degradation to Investigating, an alert should be raised. When the status stayed Investigating, no alert should be raised. When the status then changed from Investigating to False positive, an alert should be raised again.

Dashboard query:

index="mail_activity" sourcetype="service:message" DisplayName="Exchange Online" | eval myTimeNewEpoch=strptime(UpdatedTime,"%Y-%m-%dT%H:%M:%S") | eval UpdatedTime=strftime(myTimeNewEpoch,"%Y-%m-%d %H:%M:%S") | table LastUpdatedTime DisplayName Status Description | rename UpdatedTime as Time DisplayName as Application | sort -Time

Please help me with the query to create the alert. Thanks in advance.
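For the alert itself, one approach (a sketch on your field names, assuming each event carries one status update) is to compare each event's Status with the previous one using streamstats and keep only real transitions, then save the search as an alert that triggers when the result count is greater than zero:

```spl
index="mail_activity" sourcetype="service:message" DisplayName="Exchange Online"
| eval myTimeNewEpoch = strptime(UpdatedTime, "%Y-%m-%dT%H:%M:%S")
| sort 0 myTimeNewEpoch
| streamstats current=f window=1 last(Status) AS prev_status
| where isnotnull(prev_status) AND Status != prev_status
```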