All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a raw message of the form:

2022-08-15T10:41:54.266337+00:00 microService 9bc7520a-4f8d-4edc-a4cd-b08c0fae8992[[APP/PROC/WEB/2]] APPENDER=APP, DATE=2022-08-15 10:41:54.266, LEVEL=WARN , USER=, THREAD=[pool-25-thread-1], LOGGER=Factory, CORR=, INT_CORR=, X-VCAP-REQUEST-ID=, MESSAGE=warningMessage

What's the rex syntax to return microService AND warningMessage?
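A minimal hedged sketch, assuming the service name always follows the leading timestamp and MESSAGE= is always the last key in the event:

| rex field=_raw "^\S+\s+(?<microService>\S+)\s"
| rex field=_raw "MESSAGE=(?<warningMessage>.+)$"

Against the sample above, the first rex captures microService from the second whitespace-delimited token and the second captures everything after MESSAGE= as warningMessage; adjust if MESSAGE can appear mid-event.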
Hi, We recently changed the format in which our WinEventLog data is fed to Splunk, from the classic format to XML. I found that the data is also extracted differently in XML than it was in classic. For example, for EventCode=4719 we used to get Category and Subcategory fields in classic, but after switching to XML we get CategoryId and SubcategoryId instead. The challenge is that we don't have any means to convert these IDs into their meanings. I have been looking for a lookup that ships with the Splunk Add-on for Windows so I can map these IDs, but unfortunately I can't find one. I also checked the Microsoft Windows documentation but could not find how to resolve these IDs. Can anyone help with how we can map CategoryId and SubcategoryId? Thanks in advance!
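If no shipped lookup turns up, a hedged fallback is to build the mapping yourself as a CSV lookup - the file name, column names and lookup definition below are all made up, not something the Windows add-on provides:

SubcategoryId,Subcategory
<id as it appears in your XML events>,<human-readable audit subcategory>

... | lookup win_audit_subcategories SubcategoryId OUTPUT Subcategory

You would populate the CSV by comparing a few known events against the classic-format Category/Subcategory values (or Microsoft's audit policy tables) and then reference it through a lookup definition.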
Hi, I'm trying to graph events from a created report, but my time field either isn't being recognized or only shows two date points, and I can't use time filters.

| inputlookup Reference_Server_Logins.csv
| append
    [ search index=Data_2022_login_log type=LoginEvent
      | search doc.value.deltaCurrency > 0
      | eval Server=mvindex(split(mvindex(split(source, "-"), 2), "/"), 0)
      | stats count by _time, Server
      | timechart span=1d count by Server]
| dedup _time
| sort - _time
| outputlookup Reference_Server_Logins.csv

This is my report search. The normal search works fine and I can graph that. However, once the data is added to the CSV and I add that to a dashboard panel, the _time field isn't affected by the date selection field, the graph shows hours instead of days, and it only shows the two earliest values. Messing around creating pivots lets me see all the data, but again it isn't affected by the filter. Any help would be great. Thanks
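One hedged thing to check: _time read back from a CSV lookup is often a plain string rather than an epoch number, so time-based charts ignore it, and inputlookup never honours the time picker by itself. A sketch of the panel-side search addressing both (the strptime format is a guess - match it to what is actually stored in the CSV):

| inputlookup Reference_Server_Logins.csv
| eval _time=if(isnum(_time), _time, strptime(_time, "%Y-%m-%dT%H:%M:%S"))
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| fields - info_min_time info_max_time info_search_time info_sid

addinfo picks up the panel's time range, so the where clause makes the lookup rows respect the date selection.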
I'm removing ex-users from Splunk. I reassigned their knowledge objects to new users and deleted the inactive accounts. Now I've found that some datasets are still associated with owners that were already deleted. How can I change the owner of these datasets? Thanks
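A hedged sketch, assuming these are data model datasets and you have admin access to the REST API: knowledge objects expose an acl endpoint, so something along these lines (app, dataset and user names are placeholders, and the exact path for other dataset types may differ) can reassign ownership:

curl -k -u admin https://localhost:8089/servicesNS/nobody/<app>/datamodel/model/<dataset_name>/acl -d owner=<new_owner> -d sharing=app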
I have created a dashboard using Dashboard Studio and am trying to find out how to adjust the width of an input field. By googling I was able to find instructions for adding CSS to the form in a Classic dashboard, but I have not been able to find out how to do the same in Dashboard Studio. Does anyone know how to achieve this? Also, is there any way to add a chart, image or similar to the right of the input fields?
I have some data in MySQL, and I have DB Connect in Splunk. Now I want to import the MySQL data into Splunk assets, but I can only find how to import data from CSV files. I know this documentation: Collect and extract asset and identity data in Splunk Enterprise Security - Splunk Documentation, but I don't know how to "Use Splunk DB Connect" to import the data. Also, this page is empty (v7.0.1): Define identity formats - Splunk Documentation. PS: Sorry for my bad English.
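A hedged sketch of the DB Connect route, assuming a DB Connect connection named mysql_assets already exists and your table has host/IP columns (all names below are placeholders): run dbxquery in a saved search, rename the columns to the Enterprise Security asset field names, and write the result to a lookup that the asset merge process can consume:

| dbxquery connection="mysql_assets" query="SELECT hostname, ip_address, owner FROM assets"
| rename hostname AS nt_host, ip_address AS ip
| table ip, mac, nt_host, dns, owner, priority, lat, long, city, country, bunit, category, pci_domain, is_expected, should_timesync, should_update, requires_av
| outputlookup my_mysql_assets.csv

Then add my_mysql_assets.csv as a new asset lookup source in Asset and Identity Management, the same way the documentation describes for CSV files.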
While using the mvexpand command, I am getting the below error:

ERROR - command.mvexpand: output will be truncated at 1000 results due to excessive memory usage. Memory threshold of 500 MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.

Question 1 - How can I resolve the above error?
Question 2 - Is there any alternative to the mvexpand command?
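Two hedged sketches. To raise the limit, limits.conf on the search head (2000 here is just an example - mvexpand will hold that much memory per search):

[mvexpand]
max_mem_usage_mb = 2000

As an alternative, stats implicitly expands a multivalue field used in its by clause, so something like this (field names are placeholders) often replaces mvexpand entirely:

... | stats count by id, my_multivalue_field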
We are getting the error below for all indexes, but searches show no further detail.

Rawdata journal is missing in the bucket

clush -w splunk-idx1 /data/splunk/bin/splunk generate-hash-files -bucketPath /data/splunk/var/lib/Splunk
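In case it helps, a hedged sketch of the usual next step (option names from memory - check splunk fsck --help on your version). fsck can scan buckets and, where the rawdata is still present, rebuild the index files; if journal.gz itself is gone, the bucket generally has to be restored from a cluster replica or backup:

/data/splunk/bin/splunk fsck scan --all-buckets-all-indexes
/data/splunk/bin/splunk fsck repair --one-bucket --bucket-path=/data/splunk/var/lib/splunk/<index>/db/<bucket_dir>

The bucket path above is a placeholder - point it at one of the buckets named in the error first before running anything wider.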
I have installed the Microsoft Office 365 Reporting Add-on for Splunk and configured it with an AD app with the correct permissions, but it keeps failing with a 403. Below is the error that we are getting from /opt/splunk/var/log/splunk/ta_ms_o365_reporting_ms_o365_message_trace_oauth.log

2022-08-15 14:38:06,042 ERROR pid=17034 tid=MainThread file=base_modinput.py:log_error:316 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 140, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 355, in collect_events
    get_events_continuous(helper, ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 96, in get_events_continuous
    message_response = get_messages(helper, microsoft_trace_url)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 74, in get_messages
    raise e
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 66, in get_messages
    r.raise_for_status()
  File "/opt/splunk/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2022-08-10T14:38:05.092475Z'%20and%20EndDate%20eq%20datetime'2022-08-10T15:38:05.092475Z'
We have the output of two queries showing disk usage. One is from the Dell index and one is from the Huawei index.

Dell query:

|`cluster_overview(isisyd)`
| table _time stats.key lnn stats.value
| search (stats.key="ifs.bytes.avail" OR stats.key="ifs.bytes.used" OR stats.key="ifs.ssd.bytes.free" OR stats.key="ifs.ssd.bytes.used")
| eval Usage = case('stats.key'="ifs.bytes.avail","HDD Available",'stats.key'="ifs.bytes.used","HDD Used",'stats.key'="ifs.ssd.bytes.free","SSD Available",'stats.key'="ifs.ssd.bytes.used","SSD Used")
| `bytes_to_gb_tb_pb('stats.value')`
| eval Usage = Usage . " (in GB)"
| stats latest(bytes_gb) AS Space by Usage

Huawei query:

index="huawei_storage"
| fields DeviceModel,Version,WWN,SN,TotalCapacity,UsableCapacity,UsedCapacity,DataProtection,FreeCapacity
| dedup SN
| table DeviceModel,Version,WWN,SN,TotalCapacity,UsableCapacity,UsedCapacity,DataProtection,FreeCapacity

The attached screenshot shows the current individual outputs. The end goal is to have one single table combining both the Huawei and Dell storage capacity information. Any help is appreciated.
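A hedged sketch of one way to stitch them into a single table, normalising both sides to Vendor / Used_GB / Available_GB first (this ignores the SSD split on the Dell side and assumes the Huawei capacity fields are already in GB - adjust if they are not):

|`cluster_overview(isisyd)`
| table _time stats.key stats.value
| search (stats.key="ifs.bytes.used" OR stats.key="ifs.bytes.avail")
| eval Usage=if('stats.key'="ifs.bytes.used","Used_GB","Available_GB")
| `bytes_to_gb_tb_pb('stats.value')`
| stats latest(bytes_gb) AS GB by Usage
| eval Vendor="Dell"
| xyseries Vendor Usage GB
| append
    [ search index="huawei_storage"
      | dedup SN
      | stats sum(UsedCapacity) AS Used_GB, sum(FreeCapacity) AS Available_GB
      | eval Vendor="Huawei" ]
| table Vendor, Used_GB, Available_GB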
The question "Which of the following user roles are able to display a report in all apps?" has no correct answer. I've picked each answer while getting the other questions purposely wrong and received 0% on every answer. Could this be fixed? I would love to know what the correct answer is for my Splunk Core Certified User exam. Please and thank you.
I have a single-instance Splunk Enterprise deployment running on Linux. I have a bunch of data feeding into my indexer from a number of Universal Forwarders on the network. My indexer is both indexing this data and forwarding it on to a Heavy Forwarder on my network. The Heavy Forwarder then forwards my log data off to a third-party system. This has all been working well. I am now attempting to configure my Heavy Forwarder so that it forwards its _internal logs back to my indexer, but I can't get it working. To do this, I created an app on the Heavy Forwarder at /opt/splunk/etc/apps/forward_internal_back2_Indexer. Inside this app I placed the following files:

_____________________________________
default/inputs.conf

[monitor//$SPLUNK_HOME/var/log/splunk/splunkd.log/splunk/splunkd.log]
disabled=0
sourcetype=splunkd
index=_internal

[monitor//$SPLUNK_HOME/var/log/splunk/splunkd.log/splunk/metrics.log]
disabled=0
sourcetype=splunkd
index=_internal

_____________________________________
default/props.conf

[splunkd]
TRANSFORMS-routing=routeBack2Indexer

_____________________________________
default/transforms.conf

[routeBack2Indexer]
REGEX=(.)
DEST_KEY=_TCP_ROUTING
FORMAT=HF_internallogs_to_indexer

_____________________________________
default/outputs.conf

[tcpout:HF_internallogs_to_indexer]
server = <ip_address_of_splunk_indexer>:9997

_____________________________________

Once I had done this I restarted splunkd on the Heavy Forwarder. However, I can't see _internal logs coming back from my Heavy Forwarder host. I would appreciate some help figuring out where I've gone wrong.
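For comparison, a hedged sketch of what the inputs side usually looks like - splunkd.log and metrics.log sit directly under $SPLUNK_HOME/var/log/splunk, so the monitor stanzas should not carry the extra path segments (and on a full Splunk Enterprise instance these files are normally monitored already by the system default inputs.conf, so you may only need the props/transforms/outputs pieces):

[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
disabled = 0
sourcetype = splunkd
index = _internal

[monitor://$SPLUNK_HOME/var/log/splunk/metrics.log]
disabled = 0
sourcetype = splunkd
index = _internal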
The default/props.conf for v1.0.0 of this add-on has a typo.  In the line that starts with "FIELDALIAS-firewall_pkts_in_out", the destination field is currently written as "packtes_in" - it (presumably) should be "packets_in".  
Hi, I have several web servers that run on the same host (on different ports) or on different hosts. The simplest way to check that they are up is with a curl command, like below:

curl -s http://192.168.1.1:8000 | grep login

Now the question is: is there any way to monitor these services in Splunk without pain? Any add-on (like nmon)? Or should I write a script to create a log file like the one below and then index it in Splunk?

TIMESTAMP service 1, up
TIMESTAMP service 2, down

Any ideas? Thanks
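If you go the script route, a minimal hedged sketch (URLs, index name and interval are placeholders) is a small Python scripted input that prints one status line per service, plus the inputs.conf stanza to run it:

#!/usr/bin/env python3
# check_services.py - print one "timestamp service, up/down" line per service
import datetime
import urllib.request

# services to probe: name -> URL (placeholders)
SERVICES = {
    "service 1": "http://192.168.1.1:8000",
    "service 2": "http://192.168.1.1:8001",
}

for name, url in SERVICES.items():
    status = "up"
    try:
        # any connection error or HTTP error counts as down
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status >= 400:
                status = "down"
    except Exception:
        status = "down"
    print("%s %s, %s" % (datetime.datetime.now().isoformat(), name, status))

[script://$SPLUNK_HOME/etc/apps/search/bin/check_services.py]
interval = 60
index = web_status
sourcetype = service_status
disabled = 0

Scripted inputs index whatever the script writes to stdout, so no intermediate log file is needed.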
Hello, I've been working with the add-on Python code option for some time now and I find it very useful and easy when it comes to sending events to Splunk (using the ew.write_event() function). Are there other functions provided by Splunk to create dashboards and panels (such as <object>.create_dashboard()) that I could use (besides using the REST API)?
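As far as I know there is no create_dashboard() helper, but dashboards are just knowledge objects behind the data/ui/views REST endpoint, so a hedged sketch with the Python SDK (splunklib, which you would need installed; names, credentials and XML below are placeholders) could look like this:

import splunklib.client as client

# connect to splunkd (placeholder credentials)
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme",
                         app="search", owner="nobody")

# minimal Simple XML for a one-panel dashboard (placeholder search)
dashboard_xml = """<dashboard>
  <label>My Generated Dashboard</label>
  <row><panel><chart>
    <search><query>index=_internal | timechart count by sourcetype</query></search>
  </chart></panel></row>
</dashboard>"""

# POST to the views endpoint; 'name' becomes the dashboard id
args = {"name": "my_generated_dashboard", "eai:data": dashboard_xml}
service.post("data/ui/views", **args)

Under the hood this is still the REST API, just wrapped by the SDK rather than hand-rolled HTTP calls.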
I created a custom Python API script and it works fine, and I want to import it into Splunk, so I put my script at "C:\\Program Files\\Splunk\\etc\\apps\\search\\bin\\sample.py". When I run it from cmd, the results come back correctly. In Splunk I created Data inputs -> Scripts -> selected my script -> selected source type _json -> app context App Browser -> selected an index, but I am not getting any JSON results in Splunk search. Is there any configuration needed? When I check inputs.conf the file details are already correct, so why doesn't the Splunk index show any JSON data?

[script://$SPLUNK_HOME\etc\apps\search\bin\sample.py]
disabled = false
host = home
index = jsearch
interval = 60.0
sourcetype = _json
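A couple of hedged checks. The stanza writes to index = jsearch, so the events won't be under the default index - search it explicitly, and remember scripted inputs only index what the script prints to stdout:

index=jsearch sourcetype=_json earliest=-24h

If that is still empty, splunkd.log usually shows whether the script is being launched and whether it errored (sketch only; component is a field extracted for the splunkd sourcetype):

index=_internal source=*splunkd.log* component=ExecProcessor sample.py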
Hey everyone! We're currently in the process of getting ready to deploy a Splunk Cloud instance to migrate our local on-prem deployment to. Currently, our environment is a hodge-podge of installs, including completely unmanaged universal forwarders, a couple of heavy forwarder clusters, and so on. We also have resources both in our local datacenter and in various cloud providers. I've been of the thought for a while that we should toss the deployment servers into a container environment. I was curious if anyone has experience with doing this? Here's the design I want to build towards:

- Running at least two instances of Splunk Enterprise, so that we have redundancy and load balancing and can transparently upgrade
- The instances would not have any indexer or search head functionality, per Splunk's best practices
- Ideally, the instances would not have any web interfaces, because everything would be code managed
- All the instances would be configured to talk up to the Splunk Cloud environment as part of their initial deploy
- All of the instances would use a shared storage location for their apps, including self-configuration for anything beyond the initial setup. This shared storage location would be git-controlled.
- In an ideal world, the individual Splunk components would not care which deployment server they talked to - they would just check in to a load-balanced URI.

Now, I know this is massively over-engineering the solution. We've got a couple thousand potential endpoints to manage, so a single standalone deployment server would do the trick. But I want to try this route for two reasons. First, I think it will scale better - especially if I get it agnostic enough that we can use it to deploy to AWS or Azure and get cloud-local deployment servers. Second, and perhaps more importantly, I want to practice and stretch my skills with containers. I've already worked with our cloud team to build out a Splunk Connect for Kubernetes setup in order to monitor our pods and OpenShift environment. I want to take this opportunity to learn.
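On the load-balanced check-in idea, a hedged sketch of the client side (the hostname is a placeholder): deploymentclient.conf only needs one targetUri, so whatever answers on that name - a container service, VIP or cloud load balancer - is invisible to the forwarders:

[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

One caveat worth testing: if several deployment server instances sit behind that name, they generally need to serve identical serverclass.conf and app content (for example from the shared git-backed storage described above) so a client gets the same answer no matter which instance it lands on.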
Hello to all friends. Because I have very large data, I changed the value of maxresultrows, but when I use the dbxquery command I get the following error. Is there a solution?
Hi there, I am new to Splunk and struggling to join two searches based on conditions, e.g. left join with field 1 from index 2 if field 1 != " ", otherwise left join with field 2 from index 2. Field 2 is only present in index 2, and field 1 is common to both. I have two SPL searches giving the right result when executed separately. I don't know how to merge both searches based on the above condition to get the complete result. Thanks
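A hedged sketch of the usual join-free pattern: search both indexes at once, build one common key with coalesce (so field 1 is used when it is non-empty and field 2 otherwise - all names below are placeholders), and let stats merge the rows:

(index=index1) OR (index=index2)
| eval field1=if(field1="", null(), field1)
| eval join_key=coalesce(field1, field2)
| stats values(*) AS * by join_key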
I have a Windows ESXi server and installed Splunk on this server, along with the Splunk Add-on for Windows, and created a local inputs.conf in the Splunk folder. WinEventLog is enabled and the local event log is received in Splunk. This server also has CyberArk installed, and clients access this server using Remote Desktop Connection.

Question: Does my local Splunk get an event log with login details when someone accesses the server via Remote Desktop Connection? My event log currently only receives local events, and there is no src ip, no port, and no IP address details - everything is empty. Maybe because Splunk runs locally and only gets the local event log, it doesn't show any IP address, port or src ip fields in the event? If someone accesses my machine via Remote Desktop Connection, I need the event log with the IP address details. Do I need to change anything in inputs.conf to receive the IP address information correctly? Should I create a stanza in inputs.conf to receive the login event log in Splunk, like this example?

[WinEventLog:Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXML = false

or

[WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational]
disabled = 0
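Your TerminalServices stanzas look like a reasonable hedged starting point. The other usual source for the client address is the Security log, where logon type 10 (RemoteInteractive, i.e. RDP) events carry the source IP. A sketch of both pieces (the index name is a placeholder, and the field names below match the classic, non-XML extractions from the Windows add-on):

[WinEventLog:Security]
disabled = 0
index = wineventlog

index=wineventlog source="WinEventLog:Security" EventCode=4624 Logon_Type=10
| table _time, user, Source_Network_Address, Source_Port, ComputerName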