Hi, I'm trying out the new Splunk dashboard and the goal is to plot users' database document count over time. The log contains a JSON map with the top 100 users with the most documents. Since user document counts differ over time, the keys will also differ:

"userDocuments": {
    "userA": 1836,
    "userD": 1197,
    "userB": 606,
    "userZ": 108062,
    "userE": 972,
    "userC": 931
}

I'm having a hard time creating a simple table like this:

User     Count
userA    1836
userD    1197
userB    606
userZ    108062
userE    972
userC    931

Any input for a query, or for changing the data structure?
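One possible approach is to let spath auto-extract the map and then transpose the result into a two-column table. This is only a sketch; the index and sourcetype names are placeholders, and the final eval strips the "userDocuments." prefix that spath puts on the field names:

```
index=my_index sourcetype=my_sourcetype
| head 1
| spath
| fields userDocuments.*
| transpose 0
| rename column AS User, "row 1" AS Count
| eval User=replace(User, "^userDocuments\.", "")
```

For plotting over time rather than a single snapshot, the same spath-extracted fields can be fed to a timechart, e.g. with untable/xyseries, but the table above covers the latest event.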
I'm working on calculating the storage space taken up by a specific user. I would like to calculate the total size of their search artifacts at any given time; we would like to see if they are hitting their storage limits regularly, which would indicate we need to increase the storage limit.

I'm running this search to get the jobs for a specific user:

| rest /services/search/jobs
| search author=<user>

The next step would be to calculate the size of all jobs that have not yet expired. I can get the life of the search (the ttl field) and when the search started (the published field), and the sum of those should give the time the search expires. However, if I check the "Jobs" report, the "Expires" field does not correspond to what I calculate. There must be some additional factor involved in the calculation of the "Expires" field..?
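For the size question specifically, the jobs REST endpoint exposes a diskUsage field (in bytes) per job, so the artifact footprint can be summed directly rather than derived. A sketch, keeping the <user> placeholder from the search above:

```
| rest /services/search/jobs
| search author=<user>
| stats sum(diskUsage) AS total_bytes count AS job_count
| eval total_mb=round(total_bytes/1024/1024, 2)
```

On the expiry mismatch, one possible explanation is that the ttl countdown restarts whenever the job is touched (e.g. its results are viewed), so published + ttl only matches "Expires" for jobs that were never accessed after dispatch.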
Hi, I have a log like this:

2021-09-01T07:25:12.314Z id-xxx-xxx-xxx STATE {"Id":"id-xxx-xxx-xxx","timestamp":"2021-09-01T07:25:12.145Z","sourceType":"my_sourcetype","source":"source_name","Type":"my_type","event":{"field":"my_field"},"time":169,"category":"XXX"}

My props.conf looks like this:

[extract_json]
TRUNCATE = 999999
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
TIME_PREFIX = timestamp:
MAX_TIMESTAMP_LOOKAHEAD = 10000
BREAK_ONLY_BEFORE = {$
MUST_BREAK_AFTER = }$
SEDCMD-remove-header = s/^[0-9T\:Z]*.*\s*{/{/g

My issue is that I need to extract only the JSON element from my logs, but with these settings I get a bad extraction: the end of my JSON ( {"field":"my_field"},"time":169,"category":"XXX"} ) ends up in another event and is no longer valid JSON. I have child brackets inside the parent brackets and I think my SEDCMD is not correct. I would like the entire JSON element in one event. Can you help me please? Thank you!
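The greedy `.*` in that SEDCMD matches up to the last `{` on the line, which is why the tail of the JSON gets cut off. Since each record sits on a single line, one sketch is to disable line merging, break on newlines, and strip only the prefix before the first brace; these values are assumptions to adapt to your data:

```
[extract_json]
TRUNCATE = 999999
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# The timestamp lives inside the JSON, quoted
TIME_PREFIX = "timestamp":"
MAX_TIMESTAMP_LOOKAHEAD = 30
# Drop everything up to the first opening brace (non-greedy prefix)
SEDCMD-remove-header = s/^[^{]+//
KV_MODE = json
```

With the header stripped, KV_MODE = json should then extract the nested fields at search time.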
Hello, I have a table in my dashboard:

ID  Jan_Target  Jan_Actual
1   50          60
2   0           N/A

and similar columns for all the other months. Now I need a filter: if I select January, it should show both Jan_Target and Jan_Actual.

My query:

| inputlookup ABC
| table Id "Jan - Target" "Jan - Actual" "Feb - Target" "Feb - Actual" "Mar - Target" "Mar - Actual" "Apr - Target" "Apr - Actual" "May - Target" "May - Actual" "Jun - Target" "Jun - Actual" "Jul - Target" "Jul - Actual" ...

I tried using search Jan* but no results were found. Can you please help me with this?
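search Jan* finds nothing because search filters rows by field values, while picking months is a column selection, which table/fields handles. One sketch, assuming a dashboard dropdown input that sets a token (here hypothetically named month_tok) to the month prefix:

```
| inputlookup ABC
| table Id "$month_tok$ - Target" "$month_tok$ - Actual"
```

The dropdown would have static choices Jan, Feb, Mar, and so on; selecting January sets $month_tok$ to "Jan" and the table shows exactly that month's Target and Actual columns.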
Hi, I have a Splunk app in a clustered environment which I need to hide from the list of apps in the UI. To do this, I have made the following change on the SH deployer:

[ui]
is_visible = false

This works fine; however, I can still access the app's dashboards if I have the URL handy (which is okay). The only problem is that if I try to go to the app's search URL, it gives a 404 error. Any ideas or leads on how to navigate to the search window of the app? TIA!
Hey, we have Sysmon installed on our Windows servers and workstations. A quick description of what Sysmon is, from docs.microsoft.com (link): "Sysmon is a Windows system service and device driver that, once installed on a system, remains resident across system reboots to monitor and log system activity to the Windows event log."

Since Sysmon itself does not offer log analysis as a product, we thought that sending the logs into Splunk would be the right solution here. A disadvantage of this is the need to set up and maintain dozens of Splunk UFs on workstations.

Which app should I install if we have already deployed the Splunk TA for Windows? We have found an app and add-on for Sysmon on Splunkbase. However, the data Sysmon generates consists mostly of Windows event logs and perfmon events, so the events would come into the indexes populated by the Splunk TA for Windows. We do not want the information duplicated and collected by both apps under different sourcetypes and indexes.

Has anyone done this before? What are your recommendations?

Regards, Tankwell
Please suggest the add-ons used to parse data from the following devices:

1. Cisco Switches
2. Cisco WLC
3. Cisco Routers
4. Cisco Firewalls/IPS
5. Windows AD
6. Windows DNS
7. Windows DHCP
8. Cisco VoIP
9. Cisco Umbrella
10. Cisco EDR
11. SAP ERP
12. Cloud services - Azure
Hi guys, I installed the Splunk Add-on for Apache Web Server on my UF and configured it as per the documentation. I am able to see logs in my indexer, but I'm facing an issue with the tags: only the "web" and "error" tags are being generated. No data is displayed when I run the data validation search:

tag=web tag=inventory tag=activity sourcetype=apache:access OR tag=web tag=inventory tag=activity sourcetype=apache:error

Below is the configuration file, /opt/splunkforwarder/etc/apps/Splunk_TA_apache/inputs.conf:

[monitor:///var/log/httpd/error_log*]
sourcetype = apache:error
index = webserver
disabled = 0

[monitor:///var/log/httpd/access_log*]
sourcetype = apache:access:kv
index = webserver
disabled = 0

I have only this one config file, inputs.conf, in the above path. NOTE: I need this app to work in order to use it with the Splunk ITSI web server module. Please help!
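Tags in a TA are applied through eventtypes keyed on sourcetypes, so if the indexed sourcetype does not match the one the TA's eventtypes expect, the tags never attach. Notably, the validation search looks for sourcetype=apache:access while the input is set to apache:access:kv. A sketch of an aligned input, assuming apache:access is the sourcetype the TA's eventtypes are keyed on (check the TA's eventtypes.conf to confirm):

```
[monitor:///var/log/httpd/access_log*]
# Match the sourcetype used by the TA's eventtypes and validation search
sourcetype = apache:access
index = webserver
disabled = 0
```

If the eventtypes are keyed on the index as well, make sure "webserver" is included in the roles' default searched indexes.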
Why should data models all be accelerated? What about the built-in Data Models?
I have two dashboards. Is there a way to display a particular panel of the second dashboard through a hyperlink specified in the first dashboard? For example, if I click a hyperlink named "more examples" on the first dashboard, it should navigate to the "example panel" of the second dashboard. It would be really helpful if anyone could provide a solution for this!
Hi all, we have an index indexA, which gets data from multiple agencies (agentA, agentB, agentC), and another index indexB, which has only agentB data. Our requirement is to correlate between indexA and indexB so users can access only agentB data from both indexes. I tried the options below.

a. A search filter using a lookup:

index=indexA
| search [ | inputlookup indexB_lookup.csv | fields indexB_agentB | rename indexB_agentB as indexA_agentB ]

b. A correlation search:

(index=indexA sourcetype=indexA_sourcetype) OR (index=indexB sourcetype=indexB_sourcetype)
| fields indexA_agentB indexB_agentB sourcetype
| eval agentB = coalesce(indexA_agentB, indexB_agentB)
| stats dc(sourcetype) as dc_sourcetype values(sourcetype) as sourcetype values(indexA_agentB_raw) values(indexB_agentB_raw) by agentB
| where dc_sourcetype=2

Neither method worked, because the role search filter only allows the host, source, sourcetype, eventtype, and search fields. Kindly let me know of any better option to restrict user access to only agentB data from both indexes.
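If the agency is available as a field at search time (ideally an indexed field), a role-level restriction can be expressed with srchFilter in authorize.conf. A sketch, where the role name and the field name "agency" are hypothetical placeholders for whatever field carries the agency value in your events:

```
[role_agentB_users]
srchIndexesAllowed = indexA;indexB
srchFilter = agency=agentB
```

Note that srchFilter is silently ANDed onto every search the role runs, and it is most reliable with indexed fields (or host/source/sourcetype/eventtype); filtering on a purely search-time extracted field may not restrict data the way you expect, so test with a user in that role.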
Hello everyone! When I searched on the search head, I used the earliest and latest time modifiers, but they didn't work. Usually the earliest and latest modifiers take priority over the time range picker on the right side of the GUI. I ran the same search on the indexer just in case, and there the earliest and latest modifiers do work. I really want to find a solution to this problem. Thanks.
Hi all, I've noticed that the last Universal Forwarder available for FreeBSD is for 11.3, but FreeBSD is now up to 13.0. I know that FreeBSD is a bit of a dying space except in appliances, where tools exist to manage log output as syslog (e.g. pfSense, OPNsense, TrueNAS). Can I reasonably assume, then, that there are not likely to be any new versions of the Universal Forwarder supporting later versions of FreeBSD for this reason, i.e. that FreeBSD only shows up in appliances that can do syslog forwarding?
Hi all, I'm new to Splunk, so I hope I'm just missing a step or something. I've searched for a while and still am not sure what I'm doing wrong.

I have Splunk Enterprise running on one server. I have configured it to receive data via port 9997 through the "Forwarding and Receiving" settings page. I have installed a Universal Forwarder on another server. I added a forward-server (side note: can you pass in the group name via the CLI, or is it only editable in the outputs.conf file? I can't find the full options list) and verified it in the /etc/system/local/outputs.conf file. It is using defaultGroup = default-autolb-group. I then added a monitor on /var/log. The commands:

./splunk add forward-server <host name or ip address>:<listening port>
./splunk add monitor /var/log
./splunk restart

This is where I'm confused. I created an index on the Enterprise server named 'default-autolb-group' to capture the data, but it does not get populated. However, if I go to Apps > Search & Reporting and filter by index=_internal, I see some info from the server where my Universal Forwarder is installed. The latest message was after a restart and lists the cores, RAM, etc. So data is coming through from the server, but it's not going where I expect. What am I missing?
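A likely source of the confusion: defaultGroup in outputs.conf names an output group (a set of receiving indexer servers), not an index, so creating an index called 'default-autolb-group' has no effect on routing. Monitored data lands in the default index (main) unless the input says otherwise. A sketch of the monitor command with an explicit index, assuming that index already exists on the receiving side:

```shell
# Send /var/log events to a specific index instead of the default "main"
./splunk add monitor /var/log -index main
```

With the setup described above, searching index=main (or index=* host=<forwarder>) on the Enterprise server should show the /var/log data that is currently "missing".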
I developed a Splunk app and it's on Splunkbase. I need to make it compatible with Splunk Cloud. What are the criteria for getting the cloud badge on my current app on Splunkbase? Thank you, Marco
I have two regexes and want to extract both into a single new field of URLs, so both results display in the same column. Is there a way to combine them?
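One sketch, with placeholder patterns standing in for the two real regexes: extract each into its own field, then coalesce them into one column (coalesce returns the first non-null value):

```
index=my_index
| rex field=_raw "first pattern here (?<url_a>\S+)"
| rex field=_raw "second pattern here (?<url_b>\S+)"
| eval URL=coalesce(url_a, url_b)
| table URL
```

If both patterns can match the same event and you want both values kept, mvappend(url_a, url_b) instead of coalesce produces a multivalue field in that single column.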
I'm trying to create a query that basically says: show me events that contain A, B, C, or D where the latest is A or B.

I believe I could do this with a subsearch:

"A" OR "B" randomfield=X [search ("A" OR "B" OR "C" OR "D") randomfield=X | head 1]

I know the first part pulls the right data and the second part pulls the right data; I just can't get them both to return the one result that I want.

I also tried this as a transaction:

"A" OR "C" randomfield=X
| transaction startswith="A" endswith="C" keepevicted=t
| search closed_txn=0
| stats count by randomfield

But I realized there is more than just one possible start and one possible end. I just want to make sure that the LAST result from a list of specific events falls within a smaller list of specific events. Thanks!
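One sketch that avoids the subsearch entirely, using the same placeholders as the question: grab the latest event from the full set first, then keep it only if it is A or B.

```
randomfield=X ("A" OR "B" OR "C" OR "D")
| head 1
| search "A" OR "B"
```

head 1 keeps only the most recent matching event, and the final search passes it through only when it contains A or B; otherwise the query returns no results, which is a direct test of "the last event is in the smaller list".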
Greetings: I'm in search of Cisco sample logs with sourcetype=cisco_wsa_squid to sharpen my SPL. Can anyone point me to a location where such logs can be downloaded?
Can anyone provide me some information on doing a lift and shift from on-prem into Azure? Some services will still be on-prem and some will be in Azure. Would we just match our current on-prem Splunk requirements to Azure resources? What are some best practices for forwarding data from a hybrid environment? Could we send everything to Azure Monitor and then forward it to Splunk, using index and search head clusters? The Splunk architecture runs on Linux, and we have some heavy and universal forwarders sending data to Splunk. Thank you!
Is there a way to list every deployment client that has ever connected to my deployment server, even once? I've found queries to display the current clients, and obviously this information is shown in the GUI under Forwarder Management > Clients, but the list of clients resets when you restart Splunk on the deployment server.

This makes it difficult to keep track of which clients are properly phoning home while I'm making changes to configuration files (which require a Splunk restart to take effect). When Splunk is restarted, it repopulates the clients list with the ones that are actively phoning home, but there is no way to tell whether a client should be on the list but isn't, because it was erased by the restart.

If there is a way to find a list of every client that has ever connected, to make it easier to know which clients I have in this environment so far, that would help a lot. Any input at all is appreciated. Thanks!
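Since _internal data survives restarts (subject to its retention period), one sketch is to mine the deployment server's splunkd_access log, where each client check-in typically appears as a request to the broker phonehome endpoint; adjust the URI filter if your version logs it differently:

```
index=_internal sourcetype=splunkd_access "/services/broker/phonehome"
| stats latest(_time) AS last_phonehome count AS checkins BY clientip
| convert ctime(last_phonehome)
```

Run this over a time range covering the whole period you care about; clients with an old last_phonehome are the ones that have stopped checking in, regardless of any restarts in between.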