All Topics


I have fields as shown below:

_time       field1    field2
2020-05-12  40-35-32  A-B-C
2020-05-13  63-28-74  A-B-C

I need to change the events to be:

_time       field1  field2
2020-05-12  40      A
2020-05-12  35      B
2020-05-12  32      C
2020-05-13  63      A
2020-05-13  28      B
2020-05-13  74      C

I'd appreciate your help, fellow Splunkers.
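A minimal SPL sketch of one way to do this, assuming field1 and field2 always hold the same number of dash-separated parts and the values themselves contain no commas (mvzip's default delimiter):

... base search ...
| eval pair = mvzip(split(field1, "-"), split(field2, "-"))
| mvexpand pair
| eval field1 = mvindex(split(pair, ","), 0),
       field2 = mvindex(split(pair, ","), 1)
| fields - pair
| table _time field1 field2

split breaks each field into a multivalue, mvzip pairs the parts positionally, and mvexpand turns each pair into its own event before the final split separates them again.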
Hi All, I want to fetch the results for this Monday, last Monday, the Monday before that, and the Monday before that one. I tried earliest=-d@w1 latest=-d@w2 , which gives results for this week, but when I tried the same for last week with earliest=-2d@w1 latest=-2d@w2 it didn't work. Can anyone help me with this, please?
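A hedged sketch of modifiers that should land on single Mondays: @w1 snaps back to the most recent Monday at midnight, and the offset in front of it needs to be in weeks (w), not days (d):

earliest=@w1      latest=@w1+1d       (this Monday)
earliest=-1w@w1   latest=-1w@w1+1d    (last Monday)
earliest=-2w@w1   latest=-2w@w1+1d    (two Mondays ago)
earliest=-3w@w1   latest=-3w@w1+1d    (three Mondays ago)

Chaining an offset after the snap (@w1+1d) is standard time-modifier syntax, so each pair covers exactly one Monday, midnight to midnight.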
I am able to list the saved searches using the API below: /servicesNS/-/-/saved/searches/ How can I get the execution result of one? I understand that I need to get the sid, but I am not able to get it. I tried using dispatch ( /dispatch ) in the URL but it does not work (Error: Invalid custom action for this internal handler (handler: savedsearch, custom action: dispatch, eai action: list). Adding history to the URL throws an error, and if I use the normal /services/saved/searches/ it throws an error that it cannot find the report. Requesting help with this problem. Thanks, Santosh
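A hedged sketch of the REST flow (saved-search name, credentials, and host are placeholders). The "eai action: list" in the error suggests the dispatch was issued as a GET; dispatch is a POST action that returns the sid:

# 1. Dispatch the saved search (POST); the response body contains the new job's sid.
curl -k -u admin:changeme -X POST \
  "https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch/dispatch"

# 2. Poll the job until it is done, then fetch its results using that sid.
curl -k -u admin:changeme \
  "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"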
I have the following nested JSON logs: {"statementData": {"overview": [{"value": 19.7780744265071, "dataCode": "rps"}, {"value": 2.82434121706085, "dataCode": "longTermDebtEquity"}, {"value": 0.450856524955893, "dataCode": "grossMargin"}, {"value": -0.262569832402235, "dataCode": "epsQoQ"}, {"value": 22.656256448508, "dataCode": "bvps"}, {"value": 0.471736371842336, "dataCode": "roe"}, {"value": 3.0, "dataCode": "piotroskiFScore"}, {"value": 0.450856524955893, "dataCode": "profitMargin"}, {"value": 0.0591282275797272, "dataCode": "roa"}, {"value": -0.0336046639533605, "dataCode": "revenueQoQ"}, {"value": 0.957170604577976, "dataCode": "currentRatio"}], "cashFlow": [{"value": 1634000000.0, "dataCode": "depamor"}, {"value": -403000000.0, "dataCode": "ncfx"}, {"value": -115000000.0, "dataCode": "ncff"}, {"value": 4476000000.0, "dataCode": "ncfo"}, {"value": 3057000000.0, "dataCode": "ncf"}, {"value": -178000000.0, "dataCode": "investmentsAcqDisposals"}, {"value": -902000000.0, "dataCode": "ncfi"}, {"value": -1440000000.0, "dataCode": "payDiv"}, {"value": 189000000.0, "dataCode": "sbcomp"}, {"value": 0.0, "dataCode": "issrepayEquity"}, {"value": 13000000.0, "dataCode": "businessAcqDisposals"}, {"value": -737000000.0, "dataCode": "capex"}, {"value": 1356000000.0, "dataCode": "issrepayDebt"}, {"value": 3739000000.0, "dataCode": "freeCashFlow"}], "incomeStatement": [{"value": 276000000.0, "dataCode": "opinc"}, {"value": 888000000.0, "dataCode": "shareswa"}, {"value": 1175000000.0, "dataCode": "netinc"}, {"value": 1.32, "dataCode": "eps"}, {"value": 895000000.0, "dataCode": "shareswaDil"}, {"value": 7646000000.0, "dataCode": "opex"}, {"value": 326000000.0, "dataCode": "intexp"}, {"value": 9649000000.0, "dataCode": "costRev"}, {"value": 1.31, "dataCode": "epsDil"}, {"value": 0.0, "dataCode": "prefDVDs"}, {"value": 1000000.0, "dataCode": "netIncDiscOps"}, {"value": 17571000000.0, "dataCode": "revenue"}, {"value": 5955000000.0, "dataCode": "sga"}, {"value": 1625000000.0, "dataCode": "rnd"}, {"value": -1226000000.0, "dataCode": "taxExp"}, {"value": 0.0, "dataCode": "nonControllingInterests"}, {"value": 1175000000.0, "dataCode": "consolidatedIncome"}, {"value": 275000000.0, "dataCode": "ebit"}, {"value": -51000000.0, "dataCode": "ebt"}, {"value": 1909000000.0, "dataCode": "ebitda"}, {"value": 7922000000.0, "dataCode": "grossProfit"}, {"value": 1175000000.0, "dataCode": "netIncComStock"}], "balanceSheet": [{"value": 14497000000.0, "dataCode": "ppeq"}, {"value": 72183000000.0, "dataCode": "intangibles"}, {"value": 19999000000.0, "dataCode": "equity"}, {"value": 38931000000.0, "dataCode": "assetsCurrent"}, {"value": 153403000000.0, "dataCode": "totalAssets"}, {"value": 12969000000.0, "dataCode": "debtCurrent"}, {"value": 114472000000.0, "dataCode": "assetsNonCurrent"}, {"value": 8782000000.0, "dataCode": "taxAssets"}, {"value": 4172000000.0, "dataCode": "acctPay"}, {"value": 133275000000.0, "dataCode": "totalLiabilities"}, {"value": 162626000000.0, "dataCode": "retainedEarnings"}, {"value": 2348000000.0, "dataCode": "taxLiabilities"}, {"value": 92602000000.0, "dataCode": "liabilitiesNonCurrent"}, {"value": 1786000000.0, "dataCode": "inventory"}, {"value": 69453000000.0, "dataCode": "debt"}, {"value": 647000000.0, "dataCode": "investmentsCurrent"}, {"value": 0.0, "dataCode": "deposits"}, {"value": 56484000000.0, "dataCode": "debtNonCurrent"}, {"value": 40673000000.0, "dataCode": "liabilitiesCurrent"}, {"value": 28377000000.0, "dataCode": "acctRec"}, {"value": 2558000000.0, "dataCode": "investments"}, 
{"value": 1911000000.0, "dataCode": "investmentsNonCurrent"}, {"value": 11370000000.0, "dataCode": "cashAndEq"}, {"value": -29283000000.0, "dataCode": "accoci"}, {"value": 888408023.0, "dataCode": "sharesBasic"}, {"value": 17146000000.0, "dataCode": "deferredRev"}]}, "quarter": 1, "year": 2020, "date": "2020-03-31"}, {"statementData" etc... My props.conf is as follows: [testing] LINE_BREAKER = (\{\"statementData\":\s+) SHOULD_LINEMERGE = false TIME_PREFIX = \{"date":" TIME_FORMAT = %Y-%m-%d TRUNCATE = 80000 INDEXED_EXTRACTIONS = JSON JSON_TRIM_BRACES_IN_ARRAY_NAMES = true KV_MODE = none This allows for a json format that splunk can break down but the fields become a bit mangled. To get down to the dataCode and values, I used the following SPL: index=dev | spath | rename statementData.balanceSheet.dataCode as Data, statementData.balanceSheet.value as Value | eval x=mvzip(Data, Value) | mvexpand x | eval x = split(x,",") |eval Data=mvindex(x,0) | eval Value=mvindex(x,1) | table source, Data, Value This splits the data into two columns but I am having difficulty associating the columns ie. I want rps to equal 19.7780744265071. Is there an easier way to do this or have Splunk recognize the nested json at index time? I could see INDEXED_EXTRACTIONS=json working, but I would need to remove the "value" and "dataCode" fields first which would be far more work.
I have the following data in CSV format:

date,year,quarter,statementType,dataCode,value
2020-03-31,2020,1,balanceSheet,ppeq,1047418000.0
2020-03-31,2020,1,balanceSheet,acctRec,0.0
2020-03-31,2020,1,incomeStatement,ebt,-20269000.0
2020-03-31,2020,1,incomeStatement,consolidatedIncome,-14061000.0
2020-03-31,2020,1,overview,bvps,12.4058406156063

I am trying to parse these so that the dataCode values become the field names and the value column remains the value. Using INDEXED_EXTRACTIONS = csv in my props.conf results in a literal dataCode field whose values are ppeq, acctRec, etc., and likewise for value. I have tried | extract pairdelim="," kvdelim="," , which associates correctly but also pulls in the date, e.g. acctRec : 0.0 2020-03-31. I also looked at adding a transforms to parse out the fields using \d+,\d+,\w+,\w+,(\w+)\,(\S+) , but it does not appear that fields can be dynamically assigned; they would all have to be specified. Any advice is greatly appreciated.
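A hedged search-time sketch: with the CSV columns already extracted, a dynamic eval can promote each dataCode to a field name, which is usually simpler than reshaping at index time (sourcetype name is a placeholder):

sourcetype=my_financials_csv
| eval {dataCode} = value
| fields - dataCode, value
| stats values(*) as * by date, statementType

The curly braces make eval use the value of dataCode as the new field's name. If you only need one row per date with a column per dataCode, | xyseries date dataCode value pivots the table directly.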
Hello All, I am trying to set up the connection for HortonWorks using Splunk App for DB connect but getting the "SQL Method not supported" errors while trying to query the hive tables. Splun... See more...
Hello All, I am trying to set up the connection for HortonWorks using the Splunk App for DB Connect, but I am getting "SQL Method not supported" errors when trying to query the Hive tables.

Splunk HF version 8.0
Splunk App for DB Connect 3
HortonWorks Hive 3.1

Drivers installed:
hive-jdbc-3.1.0.3.1.0.179-1-standalone.jar
hadoop-common-3.1.1.3.1.0.179-1.jar
hadoop-auth-3.1.1.3.1.0.179-1.jar

I was able to make the connection and can see all the schemas and tables, but when I select a table I get the "SQL Method not supported" error. Please assist!

[Hive-Connection]
displayName = hive-hortonworks
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:hive2://<host>:<port>/<database>
jdbcUrlSSLFormat = jdbc:hive2://<host>:<port>/<database>
jdbcDriverClass = com.cloudera.hive.jdbc41.HS2Driver
jdbcDriverClass = org.apache.hive.jdbc.HiveDriver
jdbcDriverClass = org.apache.hadoop.hive.cli.CliDriver
port = 10000

[Hive__mte]
connection_type = Hive-Connection
customizedJdbcUrl = jdbc:hive2://xxxxxxxxxxx:10000/
disabled = 0
enable_query_wrapping = false
host = xxxxx
database =
identity = splnkdb2
jdbcUseSSL = false
port = 10000
readonly = false
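A hedged observation with a sketch: in a .conf stanza, repeated keys do not accumulate — the last jdbcDriverClass wins — so the stanza above effectively asks DB Connect to use org.apache.hadoop.hive.cli.CliDriver, which is not a JDBC driver at all. A connection type with a single driver class might look like:

[Hive-Connection]
displayName = hive-hortonworks
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = org.apache.hive.jdbc.HiveDriver
jdbcUrlFormat = jdbc:hive2://<host>:<port>/<database>
port = 10000

"SQL Method not supported" typically surfaces when the Hive JDBC driver does not implement an optional java.sql method the client calls, so which driver class actually loads matters.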
I'm ingesting data via HEC and I know there is data about it in _introspection, but I don't know what I'm looking at when I search for it. Here is what I know so far. I have a HEC token named testing multiple events . The token itself looks like f2584364-976f-4a68-ac3b-4a4d481ec8cd . I'm searching for introspection data about HEC via index=_introspection sourcetype="http_event_collector_metrics" . Some data has been sent to it for testing. Can someone explain what I'm seeing in the entry below?

{
  component: HttpEventCollector
  data: {
    format: json
    num_of_errors: 0
    num_of_events: 3
    num_of_parser_errors: 0
    num_of_requests: 1
    num_of_requests_in_mint_format: 0
    num_of_requests_to_disabled_token: 0
    series: http_event_collector_token
    token_name: testing multiple events
    total_bytes_indexed: 72
    total_bytes_received: 111
    transport: http
  }
  datetime: 05-11-2020 10:52:15.827 -0400
  log_level: INFO
}
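A hedged sketch of how these entries are typically read: each one is a periodic per-token rollup (here, 1 request carrying 3 events, 111 bytes received on the wire, 72 bytes indexed after processing), so summing the counters over time gives token-level totals. Field names are taken from the event above:

index=_introspection sourcetype="http_event_collector_metrics"
| stats sum(data.num_of_events) as events
        sum(data.num_of_requests) as requests
        sum(data.num_of_errors) as errors
        sum(data.total_bytes_received) as bytes_received
        sum(data.total_bytes_indexed) as bytes_indexed
        by data.token_name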
I have to report on my job logs, which span from 9 PM at night to 10 AM the next morning. I have a field called total_run_time and I want to chart it for the last 15 days. Can someone let me know how to achieve this in Splunk? I was able to chart the daily total runtime based on _time , but since my job starts the previous day and ends on the current day, I do not know how to chart it for the last 15 days. I was able to get the total runtime for the last job run from 9 PM to 10 AM using the earliest and latest modifiers, but I do not know how to chart it for 15 days. Can someone help?
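A hedged sketch of one approach: shift each event back 12 hours before snapping to the day, so everything from 9 PM through 10 AM the next morning lands on the same "job day" (index name and field usage are assumptions from the post):

index=myjobs total_run_time=* earliest=-15d@d
| eval job_day = strftime(relative_time(_time, "-12h@d"), "%Y-%m-%d")
| stats sum(total_run_time) as total_run_time by job_day

For example, an event at 9 PM Monday and one at 10 AM Tuesday both shift back 12 hours to Monday, so they are charted under the same day.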
I observe that the Rubrik Splunk Add-On v1.0.6 (January, 2020) is not listed as Compatible with Splunk 8. When can we expect that? Is version 1.0.6 Python 3 compatible now?
Why am I not getting results from this search? The error is:

Error in 'search' command: Unable to parse the search: Comparator '=' is missing a term on the right hand side

| search c_ip=[search | stats sum(bytes_out) as "Total Bytes Out" by c_ip | sort -"Total Bytes Out" | return $c_ip ]

Thanks
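A hedged sketch, assuming the goal is to filter on the top-talking c_ip: the subsearch has no search terms before its first pipe, so it can return nothing, leaving c_ip= with no right-hand term. Giving the subsearch something to search, and letting return emit the field=value pair itself, avoids both problems (index name is a placeholder):

index=web_proxy
| search [ search index=web_proxy
    | stats sum(bytes_out) as total_bytes_out by c_ip
    | sort - total_bytes_out
    | head 1
    | return c_ip ]

return c_ip (without the $) emits c_ip="<top value>", so the explicit c_ip= prefix outside the subsearch is no longer needed.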
I tried to segment the log below using \s but it does not work, even after modifying segmenters.conf and props.conf .

2020-05-13 19:27:35,921 INFO com.edifecs.shared.events.transport.rmi.RmiBusesPublisher - Failed to obtain a reference to remote EventBus. Connection to rmi://BCKCMD1:1050/EventBus refused.

/opt/splunk/etc/apps/search/local/props.conf

[test]
. . . . . .
SEGMENTATION = inner
SEGMENTATION-full = inner

/opt/splunk/etc/apps/search/local/segmenters.conf

[inner]
MAJOR = \s
MINOR =
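A hedged sketch of what may be going wrong: MAJOR in segmenters.conf takes a space-separated list of literal breaking characters (escapes such as \s, \t, and \n each stand for a single character), not a regex, and overriding the built-in [inner] stanza is riskier than defining a new one. Something like the following, deployed to the indexers — segmentation is applied at index time, so only newly indexed events are affected. The stanza names and character list are assumptions for illustration:

segmenters.conf
[my_inner]
MAJOR = [ ] < > ( ) { } | ! ; , ' " * \n \r \s \t & ? + /
MINOR =

props.conf
[test]
SEGMENTATION = my_inner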
I have sourcetype X in Splunk prod and dev. When I copy data from prod and ingest it manually in dev, selecting sourcetype X, the data disappears and the Set Sourcetype preview shows this error: "No results found. Please change source type, adjust source type settings, or check your source file." Can someone help me figure out why this is happening?
I have a heavy forwarder currently sending data to Splunk Cloud. Can I reconfigure the same heavy forwarder to stop sending data to Splunk Cloud and start sending it to on-premises Splunk instead? If yes, how?
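A hedged sketch, assuming the on-prem indexers listen on the default receiving port 9997: disable or remove the Splunk Cloud forwarding app on the heavy forwarder (typically the credentials app downloaded from your Cloud stack), point outputs.conf at the on-prem indexers, and restart. Hostnames below are placeholders:

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = onprem_indexers

[tcpout:onprem_indexers]
server = idx1.example.com:9997, idx2.example.com:9997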
Hello, I need help fixing an issue with search-time field extractions in Juniper firewall logs (very chatty). The issue isn't actually the props or transforms, I don't believe; they extract the events perfectly... "most of the time". But every now and again you click on an event and notice that its fields haven't been extracted. The events with the missing field extractions are similar in context to the ones that work, so there is no obvious reason why they aren't following the rule book of the props and transforms set before them. It doesn't make sense that it works all the time for the majority of firewalls and only works sometimes for 3. I've noted that within one second there are about 113 events (all sharing the same second-resolution timestamp), but other firewalls sending the same volume of events never have the issue. All firewalls (hosts) use the exact same props.conf and transforms.conf, pushed via the deployment server. For example, the exact same server that was having the issue around 1 PM isn't having it right now, so it's somewhat sporadic. It's only affecting 3 hosts out of 22 (these 3 are on the high side for event volume, though there's a 4th on the high side with the same number of events that never has issues). The path is: Juniper firewall logs > HF/syslog server > indexer cluster. Any thoughts on what to look into, and why only a few firewalls have this issue? Thanks
I am trying to filter out noise before it is sent to the indexer. We were previously using Windows Event Forwarding, which was able to filter; now I am trying to recreate the same filter by modifying inputs.conf on a server running a Universal Forwarder. We are trying to filter out:

Event ID = 4688
SubjectLogonId = 0x3e7 (the local system account)
AND a list of processes including the full path, for example:
C:\Windows\System32\SearchFilterHost.exe
C:\Windows\SysWOW64\SearchProtocolHost.exe

I believe there are 12 or so in total. It seems like this is doable, but is it recommended? If so, how? I have not tested this and will have to verify the variable names:

blacklist3 = EventCode="4688" SubjectLogonId="0x3e7" NewProcessName="C:\Windows\System32\SearchFilterHost.exe" | "C:\Windows\SysWOW64\SearchProtocolHost.exe"

I am thinking it will not be that simple and will need regex for SubjectLogonId AND NewProcessName, and I am not sure if that is possible. Thanks in advance for any guidance.
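A hedged sketch of the inputs.conf format: each blacklistN line is an AND of key="regex" pairs, and the value side is a regex, so the process alternation goes inside one quoted pattern rather than between pairs. SubjectLogonId and NewProcessName are not standard matchable keys for classic (non-XML) rendering, so they usually have to be matched inside Message. The pattern below is an assumption to verify against your rendered events:

[WinEventLog://Security]
# Drop 4688s from the local system logon (0x3e7) for the listed processes.
blacklist3 = EventCode="4688" Message="(?ms)Logon ID:\s+0x3e7.*New Process Name:\s+C:\\Windows\\(?:System32\\SearchFilterHost\.exe|SysWOW64\\SearchProtocolHost\.exe)"

As to whether it is recommended: filtering in the event-log input like this is generally preferred over indexer-side nullQueue for Windows noise, since the events never leave the host.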
Hello, I have a question about modification of data model in CIM: I would like to add one child dataset to DM "Change". Can I do it by separate application? What I mean exactly: If I create a m... See more...
Hello, I have a question about modification of data model in CIM: I would like to add one child dataset to DM "Change". Can I do it by separate application? What I mean exactly: If I create a modified Change.json file with a new dataset, place it to separate app (eg. my_change_dm ) and place this app to $splunk_home/etc/apps directory - will my modified JSON file merge with Change.json in Splunk_SA_CIM app? Or is there another way to modify DM in CIM without modifying it directly in the Splunk_SA_CIM app? I know that I can modify DM directly in Splunk_SA_CIM , but for some reason I need to make some research. Thank you very much for any info. Regards, Lukas Mecir
Hi, I have an Apache instance with a Splunk forwarder installed that sends logs to Splunk Cloud directly (no heavy forwarders). In /var/log/httpd/error_log we have tons of entries from our load balancer checking status:

[Thu May 14 12:11:42.799506 2020] [rewrite:trace2] [pid 26491:tid mod_rewrite.c(470): [client 10.2.35.111:29429] 10.2.35.111 - - [10.2.35.111/sid#559b685a5a10][rid#559b689f9aa0/initial] init rewrite engine with requested uri /en/healthcheck.html

How do I exclude these before they reach the Splunk Cloud indexers? I tried adding props.conf and transforms.conf under /opt/splunkforwarder/etc/system/local/ but it did not work.

props.conf
[source::/var/log/httpd/error_log]
TRANSFORMS-null = setnull

transforms.conf
[setnull]
REGEX = rewrite
DEST_KEY = queue
FORMAT = nullQueue

For REGEX I also tried healthcheck.html and \/en\/healthcheck.html. Thanks, Sherwin
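A hedged explanation with a sketch: TRANSFORMS-* rules run in the parsing pipeline, and a Universal Forwarder does not parse events (it ships mostly raw data), so props/transforms placed on the UF are ignored and the nullQueue never fires. The same stanzas need to live on a parsing tier; with UF-to-Cloud that means an app on the Splunk Cloud indexers (typically via the private-app or support-ticket process). The rules themselves would look like:

# props.conf (on the parsing tier, not the UF)
[source::/var/log/httpd/error_log*]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = /en/healthcheck\.html
DEST_KEY = queue
FORMAT = nullQueue

The trailing * in the source stanza is a hedge against rotated file names; REGEX is PCRE matched against _raw, so the forward slashes need no escaping.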
I have several questions about data architecture, rooted in CIM data models and performance considerations.

Background: We have about 2 TB of new log data every day. Some sourcetypes get hundreds of millions of new events per day, one gets 1.1 billion new events per day, and quite a few get a few million new events per day. From a data architecture standpoint, we generally drop events from a given log-generator type into an index and sourcetype for the technology; for example, Windows events go into index=win sourcetype=win. These are not the real names, but you get the idea. When evaluating the CIM data models, Windows events span a range of data models depending on the event type. As an example, Windows events can potentially be part of the following CIM data models (list not complete): Alerts, Application State, Authentication, Certificates, Inventory, etc.

Questions:
1. Given that our massive data volumes could adversely affect the performance of any given search, wouldn't it be prudent to create a data architecture that sorts data into smaller piles by index and sourcetype, more closely mimicking the CIM data models?
2. Would changing our sourcetype for Windows events from sourcetype=win to sourcetype=win-authentication, sourcetype=win-application-state, etc. have significant performance implications, potentially reducing the search target of a given model from a really big 'pile' to a smaller, more specific 'pile' of event types?
3. Would such a data architecture give noticeably better performance than data model acceleration, or in addition to it, or would it be a wash?
4. Does anyone else out there use data-architecture designs at the index and sourcetype level because of performance concerns? If so, can you give an example of your design and ballpark data volumes? What other considerations led you to that design?
5. Are there any flaws in this line of thinking? Is it potentially too much work to manage when contrasted with potentially small performance gains? Are the gains worth the overhead of setting up and maintaining the data architecture?
I have logging set to debug. Nothing interesting shows up, except that it is pulling in the exact same skip token (100 users) every second, nonstop. I have completely removed the input and made another with a new name, but it does the exact same thing. No errors, just the same Graph call every... single... second...
Hi there, I have a dashboard with several singles with a depends="$token_name$" to display info based on tokens. My understanding is that the "depends" is just for hiding or displaying data and no ... See more...
Hi there, I have a dashboard with several singles with a depends="$token_name$" to display info based on tokens. My understanding is that the "depends" is just for hiding or displaying data and no matter the value of the token, the search associated to the object would still run behind the scenes, is this correct? If so, would it be possible to prevent the search from running unless the $token_name$ exists? in other works is there a "depends" that we could use at the search level? TIA!