All Posts

Thank you very much for your reply. The returned result is indeed a percentage, and the data comes from a Siemens PLC: '2398' is actually 23.98%. So I want to convert the result to 2 decimal places and add a percentage sign to the converted decimal.
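A minimal sketch of that conversion, assuming the raw value is in hundredths of a percent as described ('2398' meaning 23.98%); raw_value is a hypothetical stand-in for the averaged drive.current.max:

| makeresults
| eval raw_value = 2398 ``` hypothetical sample value from this thread ```
| eval pct = round(raw_value / 100, 2) . "%" ``` 2398 -> "23.98%" ```

Note that appending "%" turns the number into a string, so apply it as the last step, after any timechart or stats.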
I really don't know what to do; all I want is to apply the security domains that I want.
Welcome, engineer. I did not understand where to go; can you explain in more detail? I am new to Splunk and have been looking for a solution to this problem for about two months.
Our scenario in a new deployment:

One indexer server (Windows), plus one separate Windows server as a search head
One SC4S instance on Linux
Two customers:
First customer with Windows/Linux servers. Windows server Security log data is sent to the indexer via a Universal Forwarder installed on all servers; Linux server security log data is sent to SC4S and then to the indexer.
Second customer with Windows/Linux servers, ESX, network devices, etc. Windows server log data is sent to the indexer via a Universal Forwarder installed on all servers; Linux and other security log data is sent to SC4S and then to the indexer.
For both customers, Universal Forwarder data arrives on the same default port 9997, and SC4S sends to 514.
Data from the two customers should be separated into two different indexes. The only differentiating factor between them is the IP address segment the data comes from.

I thought that separating log data according to the sending device's IP address would be a fairly straightforward scenario, but so far I have tested several props/transforms options suggested in the community pages and the documentation, and none of them have been successful; all data lands in the "main" index. If I set defaultDB = <index name> in indexes.conf, the logs are sent to that index, so the index itself is working and I can search it, but then all data would go to the same index. What, then, is the correct way to separate data into two different indexes according to the sending device's IP address, or better still according to IP segment? As I'm really new to Splunk, I appreciate any advice if somebody here has done something similar and has insight on how to accomplish such a feat.
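One commonly suggested approach for this is an index-time transform that overrides the destination index based on the host metadata. A minimal sketch, assuming the sending devices show up with their IP address as the host value; the stanza names, index names, and the 10.1./10.2. prefixes are placeholders for your real segments:

# transforms.conf on the indexer
[route_customer_a]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.1\.
DEST_KEY = _MetaData:Index
FORMAT = customer_a_index

[route_customer_b]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.2\.
DEST_KEY = _MetaData:Index
FORMAT = customer_b_index

# props.conf on the indexer
[default]
TRANSFORMS-route_by_ip = route_customer_a, route_customer_b

This only works if the host metadata actually carries the sending IP; if your Universal Forwarders report hostnames instead, the regex would have to key on something else, so check what the host field contains first.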
index ...
| timechart sum(values) span=5m limit=0 by hosts
| addtotals
| bin _time as day span=1d
| streamstats sum(Total) as running reset_on_change=true by day
| fields - day Total
First, thank you for using text to illustrate data and clearly presenting the desired result. But next time, make sure you preserve valid JSON syntax; your illustrated text is missing quotation marks required by JSON. Correcting for syntax, I assume the original data looks like

{
  "location": "US",
  "all_results": {
    "serial_a": { "result": "PASS", "version": "123", "data": [ "data1", "data2", "data3" ] },
    "serial_b": { "result": "PASS", "version": "456", "data": [ "data4", "data5" ] },
    "serial_c": { "result": "FAIL", "version": "789", "data": [ "data6", "data7" ] }
  }
}

This same ask has come up several times recently, and there are several ways to do this. This time, I'll try something new and less roundabout in logic.

| fields location
| spath path=all_results
| fields - _*
| eval serial_number = json_array_to_mv(json_keys(all_results))
| mvexpand serial_number
| eval all_results = json_extract(all_results, serial_number)
| spath input=all_results
| fields - all_results
| rename data{} as data
| eval data = mvjoin(data, ",")

Your data gives

location  data               result  serial_number  version
US        data1,data2,data3  PASS    serial_a       123
US        data4,data5        PASS    serial_b       456
US        data6,data7        FAIL    serial_c       789

This is an emulation for you to play around with and compare with real data:

| makeresults
| eval _raw = "{ \"location\": \"US\", \"all_results\": { \"serial_a\": { \"result\": \"PASS\", \"version\": \"123\", \"data\":[ \"data1\", \"data2\", \"data3\" ] }, \"serial_b\": { \"result\": \"PASS\", \"version\": \"456\", \"data\":[ \"data4\", \"data5\" ] }, \"serial_c\": { \"result\": \"FAIL\", \"version\": \"789\", \"data\":[ \"data6\", \"data7\" ] } } }"
| spath ``` data emulation above ```
Hi @nabeel652, did you try accum?

<your_search>
| autoregress status as status_old p=1
| table _time status status_old
| where NOT status=status_old
| eval NO=1
| accum NO

Ciao. Giuseppe
Hi @tuts, go to the ES menu item [Settings > Configure > Contents], choose the related Correlation Search, and check in the Notable section which Security Domain is configured. The Threat Security Domain is probably associated with your Correlation Search, and it's bundled into the CS name. In this case you have to clone the CS using the correct Security Domain and delete the old one. Ciao. Giuseppe
Thank you for the reply. I was able to achieve the same with

| streamstats reset_on_change=true count by Activity
| where count==1

But I want a count field that just increments when it senses a change in status, so I can do

| stats earliest(_time) as startTime, latest(_time) as endTime by status, count

or something like that...
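For reference, a minimal sketch of such an incrementing counter built with streamstats (assuming events are sorted oldest first, e.g. via | sort 0 _time; field names follow this thread):

<your_search>
| sort 0 _time
| streamstats current=f last(status) as prev_status
| eval changed = if(status == prev_status, 0, 1) ``` 1 whenever status differs from the previous event ```
| streamstats sum(changed) as count ``` yields 1,1,2,2,3,... as in the desired output ```
| stats earliest(_time) as startTime, latest(_time) as endTime by status, count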
Hi @SplunkDash, let me understand: you have 17 fields in your CSV but you want to extract only 11 of them, is that correct? Do you want to delete the other fields, or do you simply not need them in the visualization? If you want to delete the extra fields, you can use SEDCMD to strip them before indexing. In the second case, you can leave everything as is and take only the 11 fields. Ciao. Giuseppe
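If you take the SEDCMD route, a hedged sketch, assuming the six unwanted fields sit at the end of each CSV line (the sourcetype name is a placeholder):

# props.conf
[my_csv_sourcetype]
# keep only the first 11 comma-separated values of each line
SEDCMD-keep_first_11 = s/^((?:[^,]*,){10}[^,]*),.*$/\1/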
Hi @BlueQ, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @nabeel652, if you have already extracted the status field, you could try something like this:

<your_search>
| autoregress status as status_old p=1
| table _time status status_old
| where NOT status=status_old

Ciao. Giuseppe
Mathematically, your question has no solution. You can use the round function to convert a number to 2 decimals, but how do you "convert" a number into a percentage? Unless the value of drive.current.max represents a ratio, this makes no sense. A percentage compared with what?
that solution will work when we have a common field in both, but that's not the case here

What do you mean? You don't need a "common" field, if by that you mean identical entries. Consider these two:

IP_add.csv

ip
10.110.1.152
10.16.8.11
10.16.8.240

cidr.csv

cidr
10.16.8.0/24

If cidr.csv is set up with MATCH_TYPE(cidr), the above search will give you

cidr          ip            match
              10.110.1.152  No Match
10.16.8.0/24  10.16.8.11    10.16.8.0/24
10.16.8.0/24  10.16.8.240   10.16.8.0/24

Have you tried?
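For completeness, a sketch of the setup behind this, assuming a lookup definition named cidr_lookup over cidr.csv (the definition name is a placeholder; the file names come from this thread):

# transforms.conf (or Settings > Lookups > Lookup definitions, advanced options)
[cidr_lookup]
filename = cidr.csv
match_type = CIDR(cidr)

# search
| inputlookup IP_add.csv
| lookup cidr_lookup cidr AS ip OUTPUT cidr AS match
| eval match = coalesce(match, "No Match")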
Hello wonderful Splunk community, I have some data where I want count to change only when status changes:

Status   Count
-------------------
Online      1
Online      1
Online      1
Break       2
Break       2
Online      3
Online      3
Lunch       4
Lunch       4
Lunch       4
Offline     5
Offline     5

Any help appreciated.
@PickleRick, thanks for your comment. True, Splunk is completely different from an RDBMS. For a guy like me who works with Oracle/MSSQL/other DBs, creating suitable "queries" is like torture. Anyway, I need to do some jobs using Splunk, so I need to look for help from you. I am surprised I found a way to link two tables where two columns are keys: the most ridiculous way (from my point of view), concatenating the two strings/keys, is the correct one!
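For anyone landing here, a minimal sketch of that concatenated-key technique (index and field names are hypothetical):

index=orders_index
| eval join_key = customer_id . "|" . order_date ``` combine the two key columns into one ```
| join type=left join_key
    [ search index=payments_index
    | eval join_key = customer_id . "|" . order_date ]

Using a separator such as "|" avoids collisions where "ab"+"c" and "a"+"bc" would otherwise produce the same key.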
Hi, how do I convert the result to a 2-digit decimal and then convert it to a percentage?

index=p1991_m_tiltline_index_json_raw deviceid=8TILG02
| eval DID = "EMS " + substr(deviceid, 7)
| timechart limit=200 span=5m avg(drive.current.max) BY DID
Honestly, your requirement is a bit vague. How would that work? You want a timechart of 5-minute sums by host and, additionally, for each host a separate series repeating the overall per-host sum throughout the whole day? That will not look good on a graph.
I would like to have a graph so I can see the trend for a period, with an overlay of the running total for the day. A colleague suggested this:

index= ......
| timechart sum(values) span=5m by hosts limit=0
| addtotals

But it doesn't give the running total for the day; it gives the total for the measurement period.
Reviving a dead post here, as I'm encountering the same issue as the OP. Splunk works with the docker command, but when I attempt it with Compose I get the same error.

docker-compose.yml
Reviving a dead post here, as I'm encountering the same issue as the OP. Splunk will work with the docker command, but when I attempt with compose it get the same error. docker-compose.yml Error: