All Topics


Hello Splunkers,

I am trying to achieve the following in my search. I have two tables/sources where the dates are stored in different formats.

Source A has two fields in this format:
startdate - 2020-07-15 09:30:35
enddate - 2020-07-15 11:30:40

Source B:
date - 2020-07-15 00:00:00
hour - 10

From Source A I have to extract the date and hour separately. For example, from the startdate field I have to extract the date as 2020-07-15 and the hour as 09, and from the enddate field the date as 2020-07-15 and the hour as 12. Using the above I am trying to filter events from Source B as below:

...main search
| eval startdate=strptime(startdate, "%Y-%m-%d"), enddate=strptime(enddate, "%Y-%m-%d"), date=strptime(date, "%Y-%m-%d")
| where (startdate >= date AND starthour >= hour) AND (enddate <= date AND endhour <= hour)

The above search does not return any events. Your help is highly appreciated.
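A sketch of one way to do the extraction and filtering in SPL, assuming the timestamps look exactly like the examples above and that Source B's hour can be converted to a number; note the same-day comparison below only holds when startdate and enddate fall on one day, as in the example:

```spl
...main search
| eval start_ts=strptime(startdate, "%Y-%m-%d %H:%M:%S"), end_ts=strptime(enddate, "%Y-%m-%d %H:%M:%S")
| eval start_date=strftime(start_ts, "%Y-%m-%d"), start_hour=tonumber(strftime(start_ts, "%H"))
| eval end_date=strftime(end_ts, "%Y-%m-%d"), end_hour=tonumber(strftime(end_ts, "%H"))
| eval b_date=strftime(strptime(date, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| where b_date>=start_date AND b_date<=end_date AND tonumber(hour)>=start_hour AND tonumber(hour)<=end_hour
```

The original search likely fails for two reasons: strptime is given "%Y-%m-%d" for values that carry a full timestamp, and the hour fields (starthour, endhour) are never created. Parsing the full format first and deriving the date and hour with strftime avoids both problems.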
Hello, I am trying to use xyseries to transform a table, and I need a way to select all columns as the data fields for the xyseries command. For instance, I have a table like this:

xAxis  yAxis  randomField1  randomField2
abc    ghi    1             32
def    ghi    22            42

I want the output as follows:

xAxis  randomField1: ghi  randomField2: ghi
abc    1                  32
def    22                 42

I can do this using the following command:

xyseries xAxis, yAxis, randomField1, randomField2

But the catch is that the field names and the number of fields will not be the same for each search; in the next search I might have 3 fields (randomField1, randomField2, randomField3). Is there any way of using xyseries with dynamic field names, or is there any other command that can be used? Thanks
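One untested pattern that avoids naming the data fields explicitly is to flatten the table with untable and rebuild it with xyseries; field names here match the example above, and the eventstats step carries the yAxis value along so it can be folded into the column names:

```spl
... base search ...
| untable xAxis fname fval
| eventstats values(eval(if(fname="yAxis", fval, null()))) as yval by xAxis
| where fname!="yAxis"
| eval fname = fname.": ".yval
| xyseries xAxis fname fval
```

untable turns every column except xAxis (including yAxis) into key/value rows, so no field names need to be hard-coded; whatever randomField columns exist in a given search flow through automatically.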
We would like to ingest the Oracle's UNIFIED_AUDIT_TRAIL table and the SQL server's MSSQL\SQLAudit\*.sqlaudit files. How should we do it? Should we index the Oracle's UNIFIED_AUDIT_TRAIL table? Is there maybe an add-on? And what should we do on the SQL Server side? Should we read the files themselves?
We have a cloud storage provider setup with Box. Due to organizational requirements we require SSO/SAML auth. We are working off their docs: https://developer.box.com/guides/authentication/sso/ This means we can't use a service-style account to authenticate, as it requires a token on the user side. We have a generic service account that you have to log in to as admin first, then switch to in the Box UI. The docs for the Splunk Add-on for Box seem to only allow an OAuth2 client token/secret, which requires SSO login: https://docs.splunk.com/Documentation/AddOns/released/Box/ConfigurecredentialsonBox Has anyone had luck with another option that would not require SSO, or have another idea for a log ingestion method for something such as Box? We can link to a user account directly, but then if the user leaves, you have to migrate everything to another account, and we have 4 or 5 Box accounts. Ideally we could use JWT or App Token authentication. For other tools that connect to Box, we do the auth on the Box side so it just works.
How can I use Splunk to alert / run reports on group member changes? Currently I have something I wrote that reads group members from AD, stores them in a DB, then runs a differential. It seems like Splunk would be ideal for this. Is there a way to run a search and diff against the previous run?
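A common SPL pattern for "diff against the previous run" is to keep a snapshot of the last run in a lookup and compare against it each time the scheduled search runs. A hedged sketch; the index/sourcetype and the lookup name group_members_prev.csv are placeholders for whatever holds your membership data:

```spl
index=ad sourcetype=group_membership
| stats values(member) as member by group
| mvexpand member
| eval src="curr"
| append [| inputlookup group_members_prev.csv | eval src="prev"]
| stats values(src) as src by group member
| where mvcount(src)=1
| eval change=if(src="curr", "added", "removed")
```

Members present in both runs get both src values and are filtered out; what remains is the diff. After handling the results, refresh the snapshot by piping the current membership to | outputlookup group_members_prev.csv in the same scheduled search.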
We converted our application from JBoss to Tomcat and are now having heap utilization / GC problems. Has anyone run across this scenario?
I have installed the Universal Forwarder on my Solaris Global server, but no data is getting to my Indexer.  Looking at the Splunk log file, I can see that the Forwarder is trying to read the current day's log file (the not_terminated one), but it can't because the audit log is not a text file.  Is there a way around this?
Hi,

I have an event like the one shown below:

Today's Greeting Message=Hello|myname|name|is|Alice|myName|is|bob Good bye for today

There is a field named Message and its value is Hello|myname|name|is|Alice|myName|is|bob. How do I get the count of words in a specific field value based on a separator, then add a number to that count and assign the result to a new field? For example, for the log above I would get 8 as the count based on the | separator (separator count + 1). Then I add 2 to the count (8 + 2), and that value should be stored in a new Splunk field. This will help me check whether any event crosses a threshold of that value, so I can trigger an alert. Can someone please help me with this?
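split plus mvcount does this directly; a minimal sketch using the field name from the question (the new field names are mine):

```spl
... your search ...
| eval word_count = mvcount(split(Message, "|"))
| eval threshold_value = word_count + 2
```

split turns the value into a multivalue field of tokens, so mvcount returns separator count + 1 (8 for the example), and threshold_value becomes 10, ready to compare in an alert condition such as | where word_count > threshold_value against whatever limit you choose.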
I have a script that extracts table data from a database and loads the data to other tables in another database.  Everything is written to a single log file, with all values reported for each table in the log file (e.g. table name, return code, row count, etc).  I'm trying to write the results for each table into a Splunk table, but I'm finding that fields with a single value (e.g. ReturnCode) only print on the first row.    How can I modify this to print the value for every table? 
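If the single-value fields really do appear only once per log file rather than once per table, filldown may be all that's needed after building the table; TableName, ReturnCode, and RowCount below are stand-ins for your extracted field names:

```spl
... your search ...
| table TableName ReturnCode RowCount
| filldown ReturnCode
```

filldown copies the last non-null value of the named fields into the following rows, so a value that only prints on the first row gets propagated to every table's row beneath it.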
Hi, I want to see the number of active use cases in Splunk and the total number of hosts reporting to Splunk. Thanks in advance, Kishore
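For the reporting hosts, a tstats sketch (scope the index filter to your environment); for "active use cases", if that means enabled scheduled searches, something like | rest /services/saved/searches | where disabled=0 AND is_scheduled=1 | stats count is one hedged starting point:

```spl
| tstats count where index=* by host
| stats dc(host) as reporting_hosts
```

tstats reads indexed metadata rather than raw events, so this stays fast even across all indexes; narrow the time range to define what "reporting" means (e.g. last 24 hours).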
Hi everyone,

"/opt/splunk/var/lib/splunk/msad/datamodel_summary/" is taking up the most space on the root volume (120 GB), and the last modified date was 2 years ago. Can I delete them?

Best,
Amir
What property controls the limit causing this warning?

[AD Thread Pool-Global97] 28 Jul 2020 09:52:39,334 WARN BoundsEnforcer - BoundsEnforcer for collection com.singularity.ee.agent.util.bounded.collections.BoundedConcurrentHashMap with service name TransactionMonitoringService and property name BusinessTransactionRegistry-registeredBTInfos-limit has exceeded maximum size of 55 entries.

[AD Thread Pool-Global97] 28 Jul 2020 09:52:39,334 WARN BoundsEnforcer - BoundsEnforcer for collection com.singularity.ee.agent.util.bounded.collections.BoundedConcurrentHashMap with service name TransactionMonitoringService and property name BusinessTransactionRegistry-registeredTransactions-limit has exceeded maximum size of 55 entries.
So suppose that every day Splunk takes in a report that houses 9 different fields, one of which is called 'status'. Status can be 'New', 'Closed', or 'Open'. Because the report is sent to Splunk every day, a report with ID = 1 will indicate "Open" every day that it is open; so (for example) from 1/1/20 through 1/5/20 it is 'Open' and on 1/6/20 it is 'Closed'.

What I'm trying to do is count each unique reportId once for a given month, so from my example it should return a value of 1 for 'OPEN' and 1 for 'CLOSED' when I sum it up for the month of January. My current query is below, but it counts the number of days a reportId was 'Open' that month, so it returns 5 from my example above for 'OPEN'. The last part of the query is meant to find the rate of 'Open'/'Closed', but as you guessed it isn't doing that across all reports, only individually.

index=base
| bin _time span=1month
| stats count(eval(status="Open")) as OPEN, count(eval(status="Closed")) as CLOSED by reportName, _time, reportId
| eval Rate=abs(OPEN/CLOSED)

Does anyone have any suggestions on how to solve this? Any help would be very much appreciated. Thanks in advance. (End goal is a timechart of OPEN, CLOSED, and Rate with the x axis as time in months and the y axis as the total number of unique reportIds for that month.)
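One way to count each reportId once per month is to reduce it to a single status for the month first (its latest one), then take distinct counts; a hedged sketch using the field names from the question:

```spl
index=base
| bin _time span=1mon
| stats latest(status) as status by _time reportId
| chart dc(reportId) over _time by status
| eval Rate=round('Open'/'Closed', 2)
```

latest(status) assumes events within the month arrive in time order, so a report that was Open for five days and Closed on the sixth counts once, as Closed. chart splits by status, producing Open/Closed/New columns keyed by _time, which plots directly as the monthly timechart described in the end goal.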
I have a panel on my dashboard that is a list of transactions. I edited the drill-down to link to the search of the transaction when I click on one of the transactions on the panel. However, the search that it links to does not show the transaction successfully because the time range is not set correctly. The search gets the beginning time of the transaction correct, but it sets the end time as only 1 second after the beginning time. How do I change this automatic 1 second interval in the search to a 2 minute interval?
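If the drilldown drives another search via tokens, one Simple XML sketch is to compute the window yourself instead of accepting the default 1-second range. The token names and the 120-second offset here are mine, and this assumes the clicked row exposes the transaction's epoch start time as $row._time$:

```xml
<drilldown>
  <set token="tx_earliest">$row._time$</set>
  <eval token="tx_latest">$row._time$ + 120</eval>
</drilldown>
```

The target search then uses earliest=$tx_earliest$ latest=$tx_latest$, giving the 2-minute window rather than the automatic 1-second one.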
Hi,

According to https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice:

"Data extracted as a result of search query, whether from the UI or REST API is limited to 5% of daily ingest for optimal performance."

and

"Scheduled search is not supported from a hybrid search head."

Let's say I want to fetch 5 minutes' worth of data over the API (not from a hybrid search head, but from a third-party system) and I schedule that search to run every minute. I cannot see how that kind of setup would violate the agreement, but I want to make sure; 5 minutes' worth of data every minute will never equal 5% of daily ingest. Has anyone done a similar setup successfully?

Many thanks,
Jonas
I am very new to Splunk. Using the Universal Forwarder I send Windows application, security, system, and sysmon logs to the SSE app. I followed all the SSE-provided data onboarding guides for index, source, and sourcetype configuration, and I successfully ran automated introspection in the Data Inventory dashboard. However, when I run the CIM Compliance Check I get only 2 compliant fields for Microsoft products. SPL searches also fail, since I am missing some field names which are provided in Security Content. I have TA-windows and TA-sysmon installed on the UF and the search head, and the logs get parsed as XML data by these TAs. If I understand correctly, I am missing some CIM data model. Could you explain where to find and how to apply the right CIM data model for this app?
I've got an issue where a significant portion of my ingested log4net_xml sourcetype logs have duplicate events. I'm currently using a file monitor on an Ubuntu 16.04 LTS, Splunk 7.3.0 machine with a local monitor stanza to ingest the logs, which looks like this:

[monitor:///mnt/CALogs/.../*.log]
disabled = 0
host_segment = 4
index = ca
sourcetype = log4net_xml
#initCrcLength = 1024 # Tried this; it started reindexing all the log files.

Our Log4Net config looks like this:

<appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
  <file type="log4net.Util.PatternString" value="\\server1\ca\Logs\Application\%property{log4net:HostName}\Application.log" />
  <appendToFile value="true" />
  <rollingStyle value="Size" />
  <maximumFileSize value="5MB" />
  <staticLogFileName value="true" />
  <maxSizeRollBackups value="10" />
  <layout type="log4net.Layout.XMLLayout" />
</appender>

I have that directory mounted over SMB/CIFS with the following entry in fstab:

//server1/LogsCA /mnt/CALogs cifs username=user,password=Password,domain=somedomain.dev,sec=ntlm 0 0

Can anybody spot what I'm doing wrong? Here's a picture of the search that I'm using to estimate duplicates:
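Splunk tracks read position per file using checksums and file metadata, and monitoring over an SMB/CIFS mount is a known way for that tracking to become unstable, which shows up as re-read (duplicate) events. The usual recommendation is to run a Universal Forwarder on server1 itself and monitor the files locally; a minimal inputs.conf sketch, where the local path is only a guess inferred from the appender's UNC path and must be adjusted to wherever the share actually lives on server1:

```
[monitor://C:\ca\Logs\Application\*\Application.log]
index = ca
sourcetype = log4net_xml
```

Monitoring only the live Application.log (not the rolled .log.N copies) also avoids re-indexing rolled files, which is what uncommenting initCrcLength triggered.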
I'm having trouble with my code. My goal: when I click A in the radio button, a list of values under A shows up with a checkbox per row, and upon clicking a checkbox it shows on Panel B (I have that part sorted out).

My problem is, when I pick B in the radio button and a list shows up in the Details panel, everything I clicked under both A and B shows up; I only need B to show up. I need to reset the selections whenever I click a new radio button and hit submit, but I don't know where to reset them. Thank you in advance!

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc, TableView) {
    // Access the "default" token model
    var tokens = mvc.Components.get("default");
    var selected_values_array = [];
    var submittedTokens = mvc.Components.get('submitted');
    console.log("This is Multi-select table JS");

    // Custom renderer for applying checkbox.
    var CustomRenderer = TableView.BaseCellRenderer.extend({
        canRender: function (cell) {
            return _(['Select server']).contains(cell.field);
        },
        render: function ($td, cell) {
            var a = $('<div>').attr({
                "id": "chk-server" + cell.value,
                "value": cell.value
            }).addClass('checkbox').click(function () {
                if ($(this).attr('class') === "checkbox") {
                    selected_values_array.push($(this).attr('value'));
                    $(this).removeClass();
                    $(this).addClass("checkbox checked");
                } else {
                    $(this).removeClass();
                    $(this).addClass("checkbox");
                    var i = selected_values_array.indexOf($(this).attr('value'));
                    if (i != -1) {
                        selected_values_array.splice(i, 1);
                    }
                }
                console.log(selected_values_array);
            }).appendTo($td);
        }
    });

    // List of table ID
    var sh = mvc.Components.get("myTable");
    if (typeof (sh) != "undefined") {
        sh.getVisualization(function (tableView) {
            // Add custom cell renderer and force re-render
            tableView.table.addCellRenderer(new CustomRenderer());
            tableView.table.render();
        });
    }

    $(document).ready(function () {
        // setting up tokens with selected value.
        $("#mybutton").on("click", function (e) {
            e.preventDefault();
            tokens.set("mytoken", selected_values_array.join());
            submittedTokens.set(tokens.toJSON());
        });
    });
});
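A sketch of the reset pattern, assuming the radio input writes a token I'll call radioToken (substitute your real token name). The token model below is a minimal stand-in so the idea can run outside Splunk; with the real splunkjs components you would register the same handler on mvc.Components.get("default") inside the require callback, since that model is Backbone-based and supports "change:<name>" events:

```javascript
// Minimal stand-in for the splunkjs token model: just on()/set() with
// Backbone-style "change:<name>" events, enough to demonstrate the reset.
function TokenModel() {
    this.values = {};
    this.handlers = {};
}
TokenModel.prototype.on = function (event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
};
TokenModel.prototype.set = function (name, value) {
    this.values[name] = value;
    (this.handlers["change:" + name] || []).forEach(function (fn) { fn(); });
};

var tokens = new TokenModel();
var selected_values_array = [];

// The fix: whenever the radio token changes, drop the stored checkbox
// selections and clear the downstream token, so Panel B only ever sees
// rows checked after the latest radio choice.
tokens.on("change:radioToken", function () {
    selected_values_array.length = 0;   // empty the array in place
    tokens.set("mytoken", "");          // clear what Panel B reads
});
```

In the dashboard you would also clear the "checked" CSS class on the rendered checkboxes in the same handler; since the table re-renders when the radio-driven search reruns, that usually happens for free.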
Hi,

I have data with the following attributes: class, user, id, value. For value larger than <number>, I want to find the top 5 classes with the maximum number of records (ids), and for each of those classes the user with the most records. I have this for now (which gives me the first part):

index="x" value>1036800
| fields class id value
| chart count(id) as 12+_days by class
| sort -12+_days limit=5

An example of my expected final results:

user    12+_days
user_a  215
user_b  35
user_c  65
user_d  656
user_e  695

where each of those users is from a different class among the top 5 classes. I tried a nested subsearch with square brackets but nothing gave me the expected result. Thanks.
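This can usually be done without a subsearch by counting per (class, user), ranking classes with eventstats, and keeping the top user per class with dedup; a hedged sketch:

```spl
index="x" value>1036800
| stats count as records by class user
| eventstats sum(records) as class_total by class
| sort 0 - records
| dedup class
| sort - class_total
| head 5
| table class user records
```

After the sort on records, dedup class keeps only the highest-count user in each class; the second sort plus head 5 then restricts the result to the 5 classes with the largest overall record counts.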
I have a oneshot command that's returning a strange error message. I have everything in '-parameter value' format. Here's the command:

/opt/splunk/bin/splunk add oneshot host12345.messages -hostname host12345 –index nix_os –sourcetype syslog

and the response:

Parameters must be in the form '-parameter value'
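Worth checking: in the command as pasted, the dashes before index and sourcetype are en-dash characters (–), not the ASCII hyphen (-) that precedes hostname, which would explain why the CLI parser rejects them. The same command with plain hyphens throughout:

```
/opt/splunk/bin/splunk add oneshot host12345.messages -hostname host12345 -index nix_os -sourcetype syslog
```

This substitution commonly happens when a command is drafted in a word processor or chat tool that auto-converts hyphens.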